Tuesday, 3 December 2024

Java code with Protobuf and Gradle

 

Kafka Protobuf Consumer

This project demonstrates how to consume messages from a Kafka topic using a Java application. The messages are serialized using Protocol Buffers (Protobuf) and deserialized in the application.

Prerequisites

  1. Java Development Kit (JDK): Install JDK 11 or later.
  2. Apache Kafka: Ensure you have access to a Kafka cluster.
  3. Protocol Buffers Compiler (protoc): Install protoc and verify it is available:

bash

 

protoc --version

  4. Maven or Gradle: Install Maven or Gradle to manage project dependencies.

Setup Instructions

Step 1: Clone or Download the Project

bash

 

git clone <repository-url>

cd KafkaProtobufConsumer


Step 2: Compile Protobuf File

  1. Place your .proto file (e.g., PPP_Cloud_Streaming.proto) in the src/main/proto directory.
  2. Use protoc to generate Java files:

bash

 

protoc --java_out=src/main/java src/main/proto/PPP_Cloud_Streaming.proto

  3. The generated Java files will be placed in the src/main/java directory under the specified package.

Step 3: Configure Project Dependencies

Maven

Add the following dependencies to your pom.xml:

xml

 

<dependencies>

    <!-- Kafka Client -->

    <dependency>

        <groupId>org.apache.kafka</groupId>

        <artifactId>kafka-clients</artifactId>

        <version>3.5.1</version>

    </dependency>

    <!-- Protocol Buffers -->

    <dependency>

        <groupId>com.google.protobuf</groupId>

        <artifactId>protobuf-java</artifactId>

        <version>3.24.0</version>

    </dependency>

</dependencies>

Gradle

Add the following dependencies to your build.gradle file:

gradle

 

dependencies {

    implementation 'org.apache.kafka:kafka-clients:3.5.1'

    implementation 'com.google.protobuf:protobuf-java:3.24.0'

}
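
If you prefer to let Gradle drive the Protobuf compilation (instead of running protoc by hand in Step 2), a build.gradle along these lines should work. This is a sketch: the protobuf-gradle-plugin version is only an example, and the application block assumes the main class is KafkaProtobufConsumer (as used in Step 6), which also makes gradle run possible.

gradle

plugins {
    id 'java'
    id 'application'
    id 'com.google.protobuf' version '0.9.4'   // protobuf-gradle-plugin (version is an example)
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.apache.kafka:kafka-clients:3.5.1'
    implementation 'com.google.protobuf:protobuf-java:3.24.0'
}

protobuf {
    protoc {
        // Downloads a matching protoc, so .proto files under src/main/proto
        // are compiled automatically during the build.
        artifact = 'com.google.protobuf:protoc:3.24.0'
    }
}

application {
    mainClass = 'KafkaProtobufConsumer'
}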


Step 4: Configure Kafka Properties

Update the Kafka properties in the Java code as per your setup:

  • Bootstrap servers: Replace with your Kafka brokers.
  • Authentication: Configure SASL_PLAINTEXT, username, and password.

Example:

java

 

props.setProperty("bootstrap.servers", "broker.lt.use1.:9094");

props.setProperty("sasl.jaas.config", "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"SCRAM\" password=\"password\";");


Step 5: Build the Project

Maven

bash

 

mvn clean install

Gradle

bash

 

gradle build


Step 6: Run the Application

Using Maven:

bash

 

mvn exec:java -Dexec.mainClass="KafkaProtobufConsumer"

Using Gradle:

bash

 

gradle run


Expected Output

  1. The application connects to the Kafka topic specified in the code.
  2. It continuously polls messages from the topic.
  3. For each message, it deserializes the Protobuf payload and prints the details.

Example output:

plaintext

 

Received message: { sourceTime: 123456789, tradeId: "T12345" }

Received message: { sourceTime: 987654321, tradeId: "T54321" }


Troubleshooting

  1. Error: protoc not found
    • Ensure protoc is installed and available in your PATH.
    • Verify with protoc --version.
  2. Dependency errors:
    • Run mvn dependency:resolve (Maven) or gradle dependencies (Gradle) to verify dependencies.
  3. Connection issues:
    • Check Kafka broker details and ensure your system has network access to the brokers.
  4. Protobuf deserialization fails:
    • Ensure the .proto file used to generate the Java classes matches the producer's .proto.

 

Installing Gradle

 

Installing Gradle

On Linux (Ubuntu/Debian)

  1. Install Gradle using APT:

bash

 

sudo apt update

sudo apt install gradle

  2. Verify Installation:

bash

 

gradle -v

  3. Manual Installation (Optional):
    • Download Gradle from the Gradle Download Page.
    • Extract the archive:

bash

 

unzip gradle-X.X.X-bin.zip

    • Move it to /opt:

bash

 

sudo mv gradle-X.X.X /opt/gradle

    • Add Gradle to your PATH:

bash

 

echo 'export PATH=/opt/gradle/bin:$PATH' >> ~/.bashrc

source ~/.bashrc


On Windows

  1. Download Gradle:
    • Visit the Gradle download page and download the binary .zip file.
  2. Extract Gradle:
    • Extract the archive to a directory (e.g., C:\Program Files\Gradle).
  3. Set Environment Variables:
    • Go to Control Panel > System > Advanced System Settings > Environment Variables.
    • Add a new system variable:
      • Name: GRADLE_HOME
      • Value: C:\Program Files\Gradle
    • Edit the Path variable and add:

plaintext

 

C:\Program Files\Gradle\bin

  4. Verify Installation: Open a new terminal and run:

bash

 

gradle -v


On macOS

  1. Install Gradle with Homebrew:

bash

 

brew install gradle

  2. Verify Installation:

bash

 

gradle -v

 

Monday, 25 November 2024

Check which network interface my Python code is using when it produces data

import psutil

from scapy.all import sniff, IP, TCP, raw

import logging

import socket

 

# Logging setup

logging.basicConfig(

    filename="network_interface_usage.log",  # Log file

    level=logging.INFO,

    format="%(asctime)s - %(message)s",

    datefmt="%Y-%m-%d %H:%M:%S"

)

 

def log_and_print(message):

    """Logs the message to a file and prints it to the console."""

    print(message)

    logging.info(message)

 

def get_network_interface():

    """Returns a list of network interfaces and their IP addresses."""

    interfaces = psutil.net_if_addrs()

    interface_info = {}

    for interface, addrs in interfaces.items():

        for addr in addrs:

            if addr.family == socket.AF_INET:  # Filter for IPv4 addresses (AF_INET comes from the socket module)

                interface_info[interface] = addr.address

    return interface_info

 

def packet_callback(packet):

    """Callback function to process captured packets."""

    if IP in packet and TCP in packet:

        ip_src = packet[IP].src

        ip_dst = packet[IP].dst

        tcp_sport = packet[TCP].sport

        tcp_dport = packet[TCP].dport

 

        # Log packet details

        log_and_print(f"Packet from {ip_src}:{tcp_sport} -> {ip_dst}:{tcp_dport}")

        log_and_print(f"  Raw Packet Data: {raw(packet).hex()}")

        log_and_print("-" * 50)

 

def capture_packets(interface):

    """Start sniffing packets on a specific interface."""

    log_and_print(f"Starting packet capture on {interface}...")

    sniff(iface=interface, prn=packet_callback, store=False)

 

def main():

    # Get all network interfaces and IPs

    interfaces = get_network_interface()

    log_and_print("Detected Network Interfaces and IPs:")

    for interface, ip in interfaces.items():

        log_and_print(f"Interface: {interface} - IP: {ip}")

 

    # Capture packets on each interface (you can choose one based on your setup)
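
    # NOTE: sniff() blocks until interrupted, so this loop only ever captures on the first interface;

    # run capture_packets in separate threads or processes to watch several interfaces at once.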

    for interface in interfaces.keys():

        capture_packets(interface)

 

if __name__ == "__main__":

    main()
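
To focus the capture on just the producer's traffic, you can pass a BPF filter to scapy's sniff(). A small sketch follows; the interface name "eth0" and port 9092 (Kafka's default) are assumptions to adjust for your setup.

python

from scapy.all import sniff

def show(packet):
    """Print a one-line summary of each captured packet."""
    print(packet.summary())

# Capture only TCP traffic on the broker port, on one chosen interface.
# Replace "eth0" and 9092 with your interface and broker port.
sniff(iface="eth0", filter="tcp port 9092", prn=show, store=False)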


Sunday, 3 November 2024

CloudFormation, Terraform and CDK example

AWSTemplateFormatVersion: "2010-09-09"

Description: "CloudFormation Template for VPC Endpoints and Route 53 with VPC and Subnets as parameters."

 

Parameters:

  SelectedRegion:

    Description: "Select the region for deployment (us-east-1 or ap-east-1)."

    Type: String

    AllowedValues:

      - us-east-1

      - ap-east-1

    Default: us-east-1

 

  VPCID:

    Description: "The VPC ID where the resources will be deployed."

    Type: String

 

  Subnet1ID:

    Description: "The ID of the first subnet."

    Type: String

 

  Subnet2ID:

    Description: "The ID of the second subnet."

    Type: String

 

  Subnet3ID:

    Description: "The ID of the third subnet."

    Type: String

 

Resources:

  # VPC Endpoints

  VPCEndpointS3:

    Type: AWS::EC2::VPCEndpoint

    Properties:

      VpcId: !Ref VPCID

      ServiceName: !Sub "com.amazonaws.${SelectedRegion}.s3"

      VpcEndpointType: Gateway

 

  VPCEndpointEC2:

    Type: AWS::EC2::VPCEndpoint

    Properties:

      VpcId: !Ref VPCID

      ServiceName: !Sub "com.amazonaws.${SelectedRegion}.ec2"

      VpcEndpointType: Interface

      SubnetIds:

        - !Ref Subnet1ID

        - !Ref Subnet2ID

        - !Ref Subnet3ID

 

  VPCEndpointSSM:

    Type: AWS::EC2::VPCEndpoint

    Properties:

      VpcId: !Ref VPCID

      ServiceName: !Sub "com.amazonaws.${SelectedRegion}.ssm"

      VpcEndpointType: Interface

      SubnetIds:

        - !Ref Subnet1ID

        - !Ref Subnet2ID

        - !Ref Subnet3ID

 

  VPCEndpointSecretsManager:

    Type: AWS::EC2::VPCEndpoint

    Properties:

      VpcId: !Ref VPCID

      ServiceName: !Sub "com.amazonaws.${SelectedRegion}.secretsmanager"

      VpcEndpointType: Interface

      SubnetIds:

        - !Ref Subnet1ID

        - !Ref Subnet2ID

        - !Ref Subnet3ID

 

  VPCEndpointCloudWatchLogs:

    Type: AWS::EC2::VPCEndpoint

    Properties:

      VpcId: !Ref VPCID

      ServiceName: !Sub "com.amazonaws.${SelectedRegion}.logs"

      VpcEndpointType: Interface

      SubnetIds:

        - !Ref Subnet1ID

        - !Ref Subnet2ID

        - !Ref Subnet3ID

 

  # Route 53 Hosted Zone

  Route53HostedZone:

    Type: AWS::Route53::HostedZone

    Properties:

      Name: !Sub "example-${SelectedRegion}.com"

 

  # Route 53 Record Set

  Route53RecordSet:

    Type: AWS::Route53::RecordSet

    Properties:

      HostedZoneId: !Ref Route53HostedZone

      Name: "app.example.com."

      Type: A

      AliasTarget:

        DNSName: !Sub "vpce.${SelectedRegion}.amazonaws.com"

        HostedZoneId: !GetAtt Route53HostedZone.Id

 

Outputs:

  SelectedRegionOutput:

    Description: "The selected region."

    Value: !Ref SelectedRegion

 

  VPCIDOutput:

    Description: "The VPC ID used for this deployment."

    Value: !Ref VPCID

 

  Subnet1IDOutput:

    Description: "The Subnet ID for Subnet 1."

    Value: !Ref Subnet1ID

 

  Subnet2IDOutput:

    Description: "The Subnet ID for Subnet 2."

    Value: !Ref Subnet2ID

 

  Subnet3IDOutput:

    Description: "The Subnet ID for Subnet 3."

    Value: !Ref Subnet3ID

 

  HostedZoneIDOutput:

    Description: "The Route 53 Hosted Zone ID."

    Value: !Ref Route53HostedZone
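
The heading also mentions Terraform and CDK. As a rough illustration, an equivalent AWS CDK v2 sketch in Python could look like the following; the VPC ID, account, and region are placeholder values, and the Route 53 pieces are omitted.

python

from aws_cdk import App, Environment, Stack, aws_ec2 as ec2
from constructs import Construct

class VpcEndpointsStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Look up an existing VPC by ID (placeholder value).
        vpc = ec2.Vpc.from_lookup(self, "Vpc", vpc_id="vpc-0123456789abcdef0")

        # Gateway endpoint for S3, interface endpoints for the other services.
        vpc.add_gateway_endpoint("S3", service=ec2.GatewayVpcEndpointAwsService.S3)
        vpc.add_interface_endpoint("Ec2", service=ec2.InterfaceVpcEndpointAwsService.EC2)
        vpc.add_interface_endpoint("Ssm", service=ec2.InterfaceVpcEndpointAwsService.SSM)
        vpc.add_interface_endpoint("SecretsManager",
                                   service=ec2.InterfaceVpcEndpointAwsService.SECRETS_MANAGER)
        vpc.add_interface_endpoint("Logs",
                                   service=ec2.InterfaceVpcEndpointAwsService.CLOUDWATCH_LOGS)

app = App()
VpcEndpointsStack(app, "VpcEndpointsStack",
                  env=Environment(account="111111111111", region="us-east-1"))
app.synth()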

Sunday, 27 October 2024

Python virtual environment in Windows and copying it to a server

1. Set Up a Python Virtual Environment on Windows (CMD Compatible)

  1. Install Python (if not already installed).
  2. Open Command Prompt and Navigate to Your Project Directory:

cmd

 cd path\to\your\project

  3. Create a Virtual Environment:

cmd

 python -m venv venv

  4. Activate the Virtual Environment:

In CMD, use:

cmd

 venv\Scripts\activate

If using PowerShell, the command would be slightly different:

powershell

 .\venv\Scripts\Activate.ps1

  5. Install Dependencies:

cmd

 pip install -r requirements.txt

2. Copy the Virtual Environment to the Server

If your Windows CMD does not have the tar and scp commands available (recent Windows 10/11 builds ship both), you'll need some workarounds:

  1. Compress the Virtual Environment Using a Tool Like 7-Zip:
    • Right-click on the venv folder and compress it into a .zip file using 7-Zip or a similar tool.
    • Name the file venv.zip.
  2. Transfer the Archive to the Server:

Use an FTP client (e.g., FileZilla) or, if you have installed the Windows Subsystem for Linux (WSL), you can use scp in a WSL terminal:

bash

 scp venv.zip user@server_ip:/path/to/server/directory

  3. Decompress on the Server:

Log into your server and navigate to the directory where you copied venv.zip, then unzip it:

bash

 unzip venv.zip

  4. Activate the Virtual Environment on the Server:

bash

 source /path/to/server/directory/venv/bin/activate

  5. Verify Dependencies:

Run pip freeze to confirm all required packages are present and install any missing ones if needed.
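
Note: a virtual environment built on Windows contains Windows-specific launcher scripts and paths (Scripts\ rather than bin/), so copying it to a Linux server generally will not produce a working environment. A more reliable route is to copy only requirements.txt and rebuild the venv on the server, for example:

bash

python3 -m venv /path/to/server/directory/venv

source /path/to/server/directory/venv/bin/activate

pip install -r requirements.txt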

 


Saturday, 19 October 2024

 How to install rpk and test latency? 


Since you have Redpanda installed on AWS Linux, you can proceed with the steps mentioned earlier to test latency and throughput directly on your AWS setup. Here's a more tailored guide for your environment:

1. Install rpk on AWS Linux (if not already installed)

To ensure that you have rpk (Redpanda's CLI), you can install it by running:

bash

 

curl -LO https://packages.vectorized.io/rpk/ubuntu_20.04/amd64/latest/rpk.tar.gz

tar -xzvf rpk.tar.gz

sudo mv rpk /usr/local/bin/

Ensure that Redpanda is running before proceeding with tests:

bash

 

sudo systemctl start redpanda

2. Testing Throughput on AWS Linux

a) Using rpk

Test producer and consumer throughput directly:

  • Producer Throughput Test:

bash

 

rpk topic produce --brokers localhost:9092 --key test-key --value test-value -n 10000 --rate 500

    • -n 10000: Send 10,000 messages.
    • --rate 500: Produce at 500 messages per second.
  • Consumer Throughput Test: Consume messages and observe processing rates:

bash

 

rpk topic consume test-topic --offset oldest --num 10000

This will consume 10,000 messages from the topic and provide throughput results.

b) Using Kafka Tools (if needed)

If you have Kafka tools installed, you can use them for detailed throughput benchmarking.

  • Producer Throughput (Kafka):

bash

 

kafka-producer-perf-test.sh \

    --topic test-topic \

    --num-records 100000 \

    --record-size 1024 \

    --throughput -1 \

    --producer-props bootstrap.servers=localhost:9092

  • Consumer Throughput (Kafka):

bash

 

kafka-consumer-perf-test.sh \

    --broker-list localhost:9092 \

    --topic test-topic \

    --messages 100000

This will consume 100,000 messages from the topic and report the throughput.

3. Testing Latency on AWS Linux

a) Using rpk

For latency, you can use rpk to measure how fast Redpanda is processing your messages.

  • Producer Latency:

bash

 

rpk topic produce --brokers localhost:9092 --key test-key --value test-value -n 10000 --latency

This will measure the time it takes to deliver each message to the broker.

  • End-to-End Latency:
    1. Produce 10,000 messages:

bash

 

rpk topic produce test-topic -n 10000 --rate 100 --value "Message with latency test"

    2. At the same time, consume the messages:

bash

 

rpk topic consume test-topic --offset oldest

By comparing timestamps of message production and consumption, you can calculate the end-to-end latency.
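
As a concrete way of doing that timestamp comparison, here is a rough Python sketch using the kafka-python client against Redpanda's Kafka-compatible API. The broker address, topic name, and message count are assumptions, and the topic should be new or empty so only the test messages are read; because consumption starts after producing finishes, treat the result as an upper bound.

python

import time

from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python

BROKER = "localhost:9092"
TOPIC = "latency-test"        # use a fresh/empty topic
NUM_MESSAGES = 1000

consumer = KafkaConsumer(TOPIC, bootstrap_servers=BROKER,
                         auto_offset_reset="earliest", consumer_timeout_ms=10000)
producer = KafkaProducer(bootstrap_servers=BROKER)

# Embed the send time in each payload so the consumer can compute end-to-end latency.
for _ in range(NUM_MESSAGES):
    producer.send(TOPIC, str(time.time()).encode())
producer.flush()

latencies = []
for record in consumer:
    latencies.append(time.time() - float(record.value))
    if len(latencies) >= NUM_MESSAGES:
        break

if latencies:
    print(f"messages: {len(latencies)}, "
          f"avg end-to-end latency: {sum(latencies) / len(latencies) * 1000:.2f} ms")
else:
    print("no messages consumed")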

4. Benchmarking Multiple Brokers (if applicable)

If your Redpanda cluster has multiple brokers, you can test performance across the cluster by producing to and consuming from multiple brokers.

bash

 

rpk topic produce --brokers broker1:9092,broker2:9092 --topic test-topic -n 100000

This will stress-test Redpanda across brokers in AWS and provide insights into scalability and throughput.


You can monitor system performance (CPU, memory, disk I/O) during the tests using AWS CloudWatch or the built-in top and htop tools.

Let me know if you'd like further optimization tips or a specific setup for large-scale tests!


 Test latency and throughput in Redpanda using Redpanda CLI (rpk):


To test latency and throughput in Redpanda, follow these steps. You'll use tools like rpk (Redpanda’s CLI) or existing Kafka benchmarking tools (since Redpanda is Kafka API-compatible). Below are approaches to measure both metrics effectively:

Prerequisites

  1. Install Redpanda:
    • Follow the installation instructions from Redpanda’s official website to set up a Redpanda cluster (single-node or multi-node).
  2. Install Redpanda CLI (rpk):
    • rpk (Redpanda's CLI) is essential for running benchmarks and managing the cluster. Install it as per the official instructions:

bash

 

# install Redpanda (rpk is included in the redpanda package)
sudo apt install redpanda

  3. Kafka-compatible Tools:
    • Since Redpanda is compatible with Kafka, tools like Kafka Producer Performance (kafka-producer-perf-test.sh) and Kafka Consumer Performance (kafka-consumer-perf-test.sh) can be used.

1. Testing Throughput

Throughput measures the rate of data transfer in terms of messages per second or megabytes per second.

a) Using rpk to Measure Throughput

  • rpk has built-in benchmarking capabilities to test the producer and consumer throughput.
  • Producer Throughput Test: You can generate test data and measure the throughput of producing messages to a Redpanda topic.

bash

 rpk topic produce --brokers localhost:9092 --key test-key --value test-value -n 10000 --rate 500

Here:

    • --brokers: The address of your Redpanda broker.
    • -n 10000: Number of messages to send.
    • --rate 500: Send messages at a rate of 500 messages per second.

  • Consumer Throughput Test: Consume messages from a topic to measure how fast consumers can process them.

bash

 rpk topic consume test-topic --offset oldest --num 10000

This will consume 10,000 messages and show you the processing speed.

b) Using Kafka Performance Test Scripts

If you want to simulate heavy traffic and measure throughput:

  • Producer Throughput (Kafka):

bash

 

kafka-producer-perf-test.sh \

    --topic test-topic \

    --num-records 100000 \

    --record-size 1024 \

    --throughput -1 \

    --producer-props bootstrap.servers=localhost:9092

Here:

    • --num-records 100000: Sends 100,000 messages.
    • --record-size 1024: Each message is 1024 bytes.
    • --throughput -1: No limit on throughput (send as fast as possible).
    • --producer-props: Kafka producer properties, including the Redpanda broker address.

  • Consumer Throughput (Kafka):

bash

 

kafka-consumer-perf-test.sh \

    --broker-list localhost:9092 \

    --topic test-topic \

    --messages 100000

This will consume 100,000 messages from the topic and provide throughput results.


2. Testing Latency

Latency measures the time taken to deliver a message from producer to consumer.

a) Using rpk to Measure Latency

To test the latency of messages, you can produce and consume messages while observing the latency of message delivery.

  • Producer Latency Test: Measure the time it takes for each message to be produced:

bash

 rpk topic produce --brokers localhost:9092 --key test-key --value test-value -n 10000 --latency

This command will measure the time each message takes to be delivered to the broker.

  • End-to-End Latency Test: You can measure end-to-end latency by producing and consuming messages in real-time. This is done by observing the time when a message is produced and when it's consumed.
    • Produce messages to a topic:

bash

 rpk topic produce test-topic -n 10000 --rate 100 --value "Message with latency test"

    • At the same time, start a consumer:

bash

 rpk topic consume test-topic --offset oldest

  • Compare the timestamps of when messages were produced and when they were consumed.

b) Using Kafka Tools

To perform a detailed latency test using Kafka’s producer performance tool, you can look at how long it takes to acknowledge a sent message.

  • Producer Latency (Kafka):

bash

 

kafka-producer-perf-test.sh \

    --topic test-topic \

    --num-records 10000 \

    --record-size 1024 \

    --throughput 500 \

    --producer-props bootstrap.servers=localhost:9092 \

    --print-metrics

    • --print-metrics: This will print out detailed producer metrics, including message send latency.

3. Benchmarking with Multiple Brokers

If you're using a multi-node Redpanda cluster, you can stress-test the system by producing/consuming from multiple nodes.

  • Modify the --brokers argument to list all the brokers in your Redpanda cluster:

bash

 

rpk topic produce --brokers broker1:9092,broker2:9092 --topic test-topic -n 100000

This helps to measure latency and throughput across multiple brokers in a real-world distributed setup.


4. Monitoring Performance Metrics

  • rpk metrics: Use rpk to observe performance and resource usage metrics in real-time.

bash

 

rpk cluster info

rpk metrics stream

This gives you detailed statistics like message throughput, disk usage, and network metrics.


5. Cloud-Based Testing

If you're testing Redpanda in a cloud environment, consider using monitoring solutions like Prometheus and Grafana to track latency, throughput, and system metrics (CPU, memory, disk I/O) during the test.

Conclusion:

  • Throughput can be measured using rpk or Kafka’s producer/consumer performance scripts by stressing the cluster with a high volume of messages and measuring message rates.
  • Latency can be measured using tools like rpk to observe end-to-end message delivery times or producer acknowledgment times.

Make sure to run tests in a production-like environment to get accurate insights into how Redpanda performs under load.


Thursday, 8 August 2024

Copy 5 Linux users along with their SSH keys from one EC2 instance to another

 

To copy 5 Linux users along with their SSH keys from one EC2 instance to another, follow these steps:


1. Prepare the New EC2 Instance

  • Ensure you have root or sudo access to the new EC2 instance.
  • SSH into the new EC2 instance.

2. Export User Details and Keys from the Source Instance

  1. Backup User Information: Run the following commands on the source instance to gather details for the required users (ram1, ra2, ke3, rt4, and yu6):

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    grep "^${user}:" /etc/passwd >> users_info.txt

    grep "^${user}:" /etc/shadow >> shadow_info.txt

    sudo tar czf ${user}_ssh_keys.tar.gz /home/${user}/.ssh

done

    • users_info.txt: Contains the /etc/passwd entries.
    • shadow_info.txt: Contains the /etc/shadow entries.
    • ${user}_ssh_keys.tar.gz: Archive of .ssh directories.
  2. Transfer the Backup Files to the New Instance: Use scp to copy these files to the new EC2 instance:

bash

 

scp users_info.txt shadow_info.txt *_ssh_keys.tar.gz ec2-user@<new-ec2-ip>:/tmp/


3. Import Users and Keys to the New Instance

  1. Recreate User Accounts: On the new EC2 instance, run:

bash

 

sudo bash -c 'cat /tmp/users_info.txt >> /etc/passwd'

sudo bash -c 'cat /tmp/shadow_info.txt >> /etc/shadow'

This will add the users to the system. If the users have dedicated groups, copy the matching lines from /etc/group as well.

  2. Restore SSH Keys: For each user, extract and place the .ssh directory in the correct location:

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    sudo tar xzf /tmp/${user}_ssh_keys.tar.gz -C /home/${user}/ --strip-components=2

    sudo chown -R ${user}:${user} /home/${user}/.ssh

    sudo chmod 700 /home/${user}/.ssh

    sudo chmod 600 /home/${user}/.ssh/authorized_keys

done


4. Test SSH Access

  • Ensure that each user can log in using their existing SSH key.
  • Verify permissions:

bash

 

ls -ld /home/*/.ssh

ls -l /home/*/.ssh/authorized_keys
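
To check all five logins non-interactively from your workstation, something like the following works (a sketch; it assumes you hold each user's private key locally as <user>_id_rsa):

bash

for user in ram1 ra2 ke3 rt4 yu6; do
    ssh -i ${user}_id_rsa -o BatchMode=yes ${user}@<new-ec2-ip> true \
        && echo "${user}: OK" || echo "${user}: FAILED"
done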


5. Cleanup (Optional)

  • Remove temporary files:

bash

 

rm -f /tmp/users_info.txt /tmp/shadow_info.txt /tmp/*_ssh_keys.tar.gz

This approach ensures that user data, permissions, and SSH keys are properly copied and set up on the new instance.

 

_______________________________________________________

 

If the SSH keys are RSA keys (e.g., stored in .ssh/authorized_keys for each user), the process remains mostly the same. RSA keys are typically stored in a user's ~/.ssh/authorized_keys file. Here's how you can specifically manage RSA keys during the migration.


1. Verify RSA Key Setup on the Source Instance

  • On the source instance, check each user's ~/.ssh/authorized_keys file to confirm the presence of RSA keys:

bash

 

cat /home/ram1/.ssh/authorized_keys

Ensure this file contains the RSA public key (lines starting with ssh-rsa).


2. Export User Details and RSA Keys

Run the following commands to back up only the user details and RSA key files:

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    grep "^${user}:" /etc/passwd >> users_info.txt

    grep "^${user}:" /etc/shadow >> shadow_info.txt

    sudo tar czf ${user}_ssh_rsa_keys.tar.gz /home/${user}/.ssh/authorized_keys

done


3. Transfer Backup Files to the New Instance

Use scp to transfer the backup files to the new EC2 instance:

bash

 

scp users_info.txt shadow_info.txt *_ssh_rsa_keys.tar.gz ec2-user@<new-ec2-ip>:/tmp/


4. Recreate Users and Restore RSA Keys

On the new EC2 instance:

  1. Add User Accounts:

bash

 

sudo bash -c 'cat /tmp/users_info.txt >> /etc/passwd'

sudo bash -c 'cat /tmp/shadow_info.txt >> /etc/shadow'

  2. Restore RSA Keys: Extract and set up the RSA keys for each user:

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    sudo mkdir -p /home/${user}/.ssh

    sudo tar xzf /tmp/${user}_ssh_rsa_keys.tar.gz -C /home/${user}/.ssh/ --strip-components=3

    sudo chown -R ${user}:${user} /home/${user}/.ssh

    sudo chmod 700 /home/${user}/.ssh

    sudo chmod 600 /home/${user}/.ssh/authorized_keys

done


5. Test RSA Key Access

  • Ensure that users can log in using their RSA keys.
  • Check permissions for the .ssh directory and authorized_keys file:

bash

 

ls -ld /home/*/.ssh

ls -l /home/*/.ssh/authorized_keys


6. Optional: Validate SSH Keys

If you want to verify that the keys match on the new instance:

  • Compare the ~/.ssh/authorized_keys file on both instances for each user:

bash

 

diff /home/ram1/.ssh/authorized_keys <(ssh ec2-user@<new-ec2-ip> "cat /home/ram1/.ssh/authorized_keys")

This ensures the RSA keys are correctly migrated.

 

Do I have to create user in new EC2?

 

 

Yes, you need to create the users (ram1, ra2, ke3, rt4, yu6) on the new EC2 instance because user accounts are not automatically transferred between systems. Each Linux user must exist on the new system to ensure their home directories and permissions are correctly assigned. Here's how you can do it:


Steps to Create Users in the New EC2

  1. Create the Users with the Same UID and GID (Recommended)
    • Check the user IDs (UID) and group IDs (GID) on the source instance using:

bash

 

grep "^ram1:" /etc/passwd

Example output:

plaintext

 

ram1:x:1001:1001:Ram User:/home/ram1:/bin/bash

    • Use the same UID and GID when creating the users on the new EC2 instance (create each group first, e.g. sudo groupadd -g 1001 ram1, since useradd -g expects the group to exist):

bash

 

sudo useradd -u 1001 -g 1001 -m -s /bin/bash ram1

sudo useradd -u 1002 -g 1002 -m -s /bin/bash ra2

sudo useradd -u 1003 -g 1003 -m -s /bin/bash ke3

sudo useradd -u 1004 -g 1004 -m -s /bin/bash rt4

sudo useradd -u 1005 -g 1005 -m -s /bin/bash yu6

    • If you don't need to preserve UID and GID, you can simply run:

bash

 

sudo useradd -m -s /bin/bash ram1

sudo useradd -m -s /bin/bash ra2

sudo useradd -m -s /bin/bash ke3

sudo useradd -m -s /bin/bash rt4

sudo useradd -m -s /bin/bash yu6

  2. Set Home Directory Permissions: Ensure each user owns their home directory:

bash

 

sudo chown -R ram1:ram1 /home/ram1

sudo chown -R ra2:ra2 /home/ra2

sudo chown -R ke3:ke3 /home/ke3

sudo chown -R rt4:rt4 /home/rt4

sudo chown -R yu6:yu6 /home/yu6


Copy SSH Keys to the New EC2

After creating the users, transfer their SSH keys as follows:

  1. Transfer the .ssh/authorized_keys Files: On the source instance, archive the .ssh directories for these users:

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    sudo tar czf ${user}_ssh.tar.gz -C /home/${user}/ .ssh

done

Use scp to copy these archives to the new EC2 instance:

bash

 

scp *_ssh.tar.gz ec2-user@<new-ec2-ip>:/tmp/

  2. Extract and Set Permissions on the New EC2: On the new EC2 instance, restore the .ssh directories:

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    sudo mkdir -p /home/${user}/.ssh

    sudo tar xzf /tmp/${user}_ssh.tar.gz -C /home/${user}/

    sudo chown -R ${user}:${user} /home/${user}/.ssh

    sudo chmod 700 /home/${user}/.ssh

    sudo chmod 600 /home/${user}/.ssh/authorized_keys

done


Test and Verify

  • Verify users can log in using their SSH keys:

bash

 

ssh -i <private-key> ram1@<new-ec2-ip>

  • Ensure .ssh directory and file permissions are correct:

bash

 

ls -ld /home/*/.ssh

ls -l /home/*/.ssh/authorized_keys


Optional: Automate User Creation and Key Setup

You can write a script to automate these steps if you need to repeat this process for multiple EC2 instances. Let me know if you'd like help creating a script!

 

 

 

automation script for Steps to Create Users in the New EC2

 

 

Here’s an automation script to create users in a new EC2 instance, set up their home directories, and copy their SSH keys. Save this script as setup_users.sh and execute it on the new EC2 instance.


Steps to Use the Script

  1. Prepare SSH Key Archives: Ensure SSH key archives (<username>_ssh.tar.gz) for each user are transferred to /tmp on the new EC2 instance.
  2. Run the Script:
    • Save the script as setup_users.sh on the new EC2 instance.
    • Make it executable:

bash

 

chmod +x setup_users.sh

    • Execute the script:

bash

 

sudo ./setup_users.sh

  3. Verify Setup:
    • Ensure the users are created by listing them:

bash

 

cat /etc/passwd | grep -E 'ram1|ra2|ke3|rt4|yu6'

    • Check the .ssh/authorized_keys files for each user.

This script automates the process of creating users, preserving UIDs/GIDs, and setting up RSA keys. Let me know if you need further modifications!

 

 

If you don't need them, remove UIDs and GIDs

 

 

 

I've removed the UID and GID references from your script. Let me know if you need any further modifications!

 

I am a little confused. Give me a step-by-step guide, using as much automation as possible, to migrate users with their RSA keys to a new EC2 instance.

 

 

Here is a step-by-step guide for automating the migration of users with their RSA SSH keys to a new EC2 instance, utilizing the provided script.


Step 1: Prepare the Source System

  1. Archive SSH Keys for Each User:
    • On the source system, archive the .ssh directory of each user:

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    sudo tar czf /tmp/${user}_ssh.tar.gz -C /home/${user} .ssh

done

    • This creates compressed .tar.gz files in /tmp.
  2. Transfer Archives to the New EC2 Instance:
    • Use scp to copy the .tar.gz files to the /tmp directory on the new EC2 instance:

bash

 

scp /tmp/*_ssh.tar.gz ec2-user@<new-ec2-ip>:/tmp/


Step 2: Execute the Automation Script on the New EC2 Instance

  1. Copy the Script to the New EC2 Instance: Save the provided script (setup_users.sh) on your local machine and copy it to the new EC2 instance:

bash

 

scp setup_users.sh ec2-user@<new-ec2-ip>:/home/ec2-user/

  2. Log in to the New EC2 Instance:

bash

 

ssh -i <private-key> ec2-user@<new-ec2-ip>

  3. Make the Script Executable: Once logged in, set executable permissions for the script:

bash

 

chmod +x /home/ec2-user/setup_users.sh

  4. Run the Script: Execute the script with sudo to create users and restore their SSH keys:

bash

 

sudo /home/ec2-user/setup_users.sh


Step 3: Verify the Setup

  1. Check Users: Ensure the users are created:

bash

 

cat /etc/passwd | grep -E 'ram1|ra2|ke3|rt4|yu6'

  2. Verify SSH Key Setup: Confirm each user has their .ssh/authorized_keys file:

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    echo "Checking SSH keys for $user"

    sudo ls -l /home/$user/.ssh/authorized_keys

done

  3. Test SSH Access: Test logging in for one of the migrated users:

bash

 

ssh -i <private-key> ram1@<new-ec2-ip>


How the Script Works

  1. User Creation:
    • Checks if the user exists using id -u. If not, creates the user with useradd.
  2. Restore SSH Keys:
    • Looks for the corresponding .tar.gz archive in /tmp.
    • Extracts the archive into the user’s .ssh directory.
    • Sets proper ownership and permissions:
      • .ssh: 700
      • authorized_keys: 600
  3. Validation:
    • Confirms the user and SSH key setup at the end.

Step 4: Cleanup

  1. Remove the archives from /tmp on the new EC2 instance:

bash

 

sudo rm -f /tmp/*_ssh.tar.gz


This step-by-step guide ensures that your users and their SSH keys are successfully migrated to the new EC2 instance. Let me know if you need further clarification!

 

 

setup_users.sh

#!/bin/bash

 

# List of users to create

USERS=("ram1" "ra2" "ke3" "rt4" "yu6")

 

# Path to SSH key archives (assumes archives are already transferred to /tmp)

SSH_KEY_ARCHIVE_PATH="/tmp"

 

# Check if running as root

if [ "$EUID" -ne 0 ]; then

  echo "Please run as root or use sudo."

  exit 1

fi

 

# Create users and set up SSH keys

for USERNAME in "${USERS[@]}"; do

  echo "Creating user: $USERNAME"

 

  # Create user if it doesn't already exist

  if ! id -u "$USERNAME" > /dev/null 2>&1; then

    useradd -m -s /bin/bash "$USERNAME"

  else

    echo "User $USERNAME already exists. Skipping creation."

  fi

 

  # Restore SSH keys

  SSH_ARCHIVE="$SSH_KEY_ARCHIVE_PATH/${USERNAME}_ssh.tar.gz"

  if [ -f "$SSH_ARCHIVE" ]; then

    echo "Restoring SSH keys for $USERNAME from $SSH_ARCHIVE"

    mkdir -p "/home/$USERNAME/.ssh"

    tar xzf "$SSH_ARCHIVE" -C "/home/$USERNAME/.ssh" --strip-components=2

    chown -R "$USERNAME:$USERNAME" "/home/$USERNAME/.ssh"

    chmod 700 "/home/$USERNAME/.ssh"

    chmod 600 "/home/$USERNAME/.ssh/authorized_keys"

  else

    echo "SSH archive for $USERNAME not found at $SSH_ARCHIVE. Skipping key restoration."

  fi

 

done

 

# Verify setup

echo "User creation and SSH key setup complete. Verifying setup..."

for USERNAME in "${USERS[@]}"; do

  if id -u "$USERNAME" > /dev/null 2>&1; then

    echo "User $USERNAME exists. Home directory: $(eval echo ~$USERNAME)"

    if [ -f "/home/$USERNAME/.ssh/authorized_keys" ]; then

      echo "SSH keys for $USERNAME are set up."

    else

      echo "No SSH keys found for $USERNAME."

    fi

  else

    echo "User $USERNAME does not exist."

  fi

 

done