Thursday, 8 August 2024

Copy 5 Linux users along with their SSH keys from one EC2 instance to another

 

To copy 5 Linux users along with their SSH keys from one EC2 instance to another, follow these steps:


1. Prepare the New EC2 Instance

  • Ensure you have root or sudo access to the new EC2 instance.
  • SSH into the new EC2 instance.

2. Export User Details and Keys from the Source Instance

  1. Backup User Information: Run the following commands on the source instance to gather details for the required users (ram1, ra2, ke3, rt4, and yu6):

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    grep "^${user}:" /etc/passwd >> users_info.txt

    grep "^${user}:" /etc/shadow >> shadow_info.txt

    sudo tar czf ${user}_ssh_keys.tar.gz /home/${user}/.ssh

done

    • users_info.txt: Contains the /etc/passwd entries.
    • shadow_info.txt: Contains the /etc/shadow entries.
    • ${user}_ssh_keys.tar.gz: Archive of .ssh directories.
  2. Transfer the Backup Files to the New Instance: Use scp to copy these files to the new EC2 instance:

bash

 

scp users_info.txt shadow_info.txt *_ssh_keys.tar.gz ec2-user@<new-ec2-ip>:/tmp/


3. Import Users and Keys to the New Instance

  1. Recreate User Accounts: On the new EC2 instance, run:

bash

 

sudo bash -c 'cat /tmp/users_info.txt >> /etc/passwd'

sudo bash -c 'cat /tmp/shadow_info.txt >> /etc/shadow'

This appends the users to the system. Note that it assumes their UIDs and GIDs do not clash with accounts that already exist on the new instance, and that matching /etc/group entries and home directories still need to be created (the useradd commands covered below handle both).

  2. Restore SSH Keys: For each user, extract and place the .ssh directory in the correct location:

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    sudo tar xzf /tmp/${user}_ssh_keys.tar.gz -C /home/${user}/ --strip-components=2

    sudo chown -R ${user}:${user} /home/${user}/.ssh

    sudo chmod 700 /home/${user}/.ssh

    sudo chmod 600 /home/${user}/.ssh/authorized_keys

done


4. Test SSH Access

  • Ensure that each user can log in using their existing SSH key.
  • Verify permissions:

bash

 

ls -ld /home/*/.ssh

ls -l /home/*/.ssh/authorized_keys


5. Cleanup (Optional)

  • Remove temporary files:

bash

 

rm -f /tmp/users_info.txt /tmp/shadow_info.txt /tmp/*_ssh_keys.tar.gz

This approach ensures that user data, permissions, and SSH keys are properly copied and set up on the new instance.
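The export side of the steps above can be wrapped into one small helper. This is a minimal sketch, not the exact commands from this guide: the database file is a parameter (instead of hard-coding /etc/passwd or /etc/shadow) so the logic can be tried safely, and the demo entries below are made up.

```shell
#!/bin/bash
# Sketch: collect the /etc/passwd-style entries for a list of users.
# The database file is a parameter so the logic can be exercised on a
# fixture instead of the real /etc/passwd or /etc/shadow.
collect_entries() {
  local db_file=$1; shift
  local user
  for user in "$@"; do
    grep "^${user}:" "$db_file"
  done
}

# Demo on a small fixture (hypothetical entries, not from a real system)
fixture=$(mktemp)
cat > "$fixture" <<'EOF'
root:x:0:0:root:/root:/bin/bash
ram1:x:1001:1001:Ram User:/home/ram1:/bin/bash
ra2:x:1002:1002::/home/ra2:/bin/bash
EOF
collect_entries "$fixture" ram1 ra2
rm -f "$fixture"
```

On the real source instance you would run something like `collect_entries /etc/passwd ram1 ra2 ke3 rt4 yu6 >> users_info.txt`, and the same against /etc/shadow with sudo.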

 

_______________________________________________________

 

If the SSH keys are RSA keys (e.g., stored in .ssh/authorized_keys for each user), the process remains mostly the same. RSA keys are typically stored in a user's ~/.ssh/authorized_keys file. Here's how you can specifically manage RSA keys during the migration.


1. Verify RSA Key Setup on the Source Instance

  • On the source instance, check each user's ~/.ssh/authorized_keys file to confirm the presence of RSA keys:

bash

 

cat /home/ram1/.ssh/authorized_keys

Ensure this file contains the RSA public key (lines starting with ssh-rsa).
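A quick way to confirm every user actually has at least one RSA key is to count the ssh-rsa lines. A minimal sketch, using a helper that takes the file path as a parameter so it can be tried on any authorized_keys file:

```shell
#!/bin/bash
# Count the RSA public keys (lines starting with "ssh-rsa") in an
# authorized_keys file; prints 0 if the file does not exist.
count_rsa_keys() {
  if [ -f "$1" ]; then
    grep -c '^ssh-rsa' "$1"
  else
    echo 0
  fi
}

# Demo on a fixture file with one (made-up) RSA key
fixture=$(mktemp)
printf 'ssh-rsa AAAAB3Nza... ram1@host\n' > "$fixture"
count_rsa_keys "$fixture"   # prints 1
rm -f "$fixture"
```

On the source instance you could then loop over all five users: `for user in ram1 ra2 ke3 rt4 yu6; do echo "$user: $(count_rsa_keys /home/$user/.ssh/authorized_keys)"; done`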


2. Export User Details and RSA Keys

Run the following commands to back up only the user details and RSA key files:

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    grep "^${user}:" /etc/passwd >> users_info.txt

    grep "^${user}:" /etc/shadow >> shadow_info.txt

    sudo tar czf ${user}_ssh_rsa_keys.tar.gz /home/${user}/.ssh/authorized_keys

done


3. Transfer Backup Files to the New Instance

Use scp to transfer the backup files to the new EC2 instance:

bash

 

scp users_info.txt shadow_info.txt *_ssh_rsa_keys.tar.gz ec2-user@<new-ec2-ip>:/tmp/


4. Recreate Users and Restore RSA Keys

On the new EC2 instance:

  1. Add User Accounts:

bash

 

sudo bash -c 'cat /tmp/users_info.txt >> /etc/passwd'

sudo bash -c 'cat /tmp/shadow_info.txt >> /etc/shadow'

  2. Restore RSA Keys: Extract and set up the RSA keys for each user:

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    sudo mkdir -p /home/${user}/.ssh

    sudo tar xzf /tmp/${user}_ssh_rsa_keys.tar.gz -C /home/${user}/ --strip-components=2

    sudo chown -R ${user}:${user} /home/${user}/.ssh

    sudo chmod 700 /home/${user}/.ssh

    sudo chmod 600 /home/${user}/.ssh/authorized_keys

done


5. Test RSA Key Access

  • Ensure that users can log in using their RSA keys.
  • Check permissions for the .ssh directory and authorized_keys file:

bash

 

ls -ld /home/*/.ssh

ls -l /home/*/.ssh/authorized_keys


6. Optional: Validate SSH Keys

If you want to verify that the keys match on the new instance:

  • Compare the ~/.ssh/authorized_keys file on both instances for each user:

bash

 

diff /home/ram1/.ssh/authorized_keys <(ssh ec2-user@<new-ec2-ip> "cat /home/ram1/.ssh/authorized_keys")

This ensures the RSA keys are correctly migrated.
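If running diff over SSH is awkward, comparing checksums works just as well. A sketch of a small comparison helper; the file paths are parameters, and in practice one of the two digests would come from an ssh call like the one above:

```shell
#!/bin/bash
# Compare two authorized_keys files by SHA-256 digest; prints "match"
# or "MISMATCH". On a real migration, one digest would be computed
# remotely, e.g.:
#   ssh ec2-user@<new-ec2-ip> "sha256sum /home/ram1/.ssh/authorized_keys"
compare_keys() {
  local a b
  a=$(sha256sum "$1" | awk '{print $1}')
  b=$(sha256sum "$2" | awk '{print $1}')
  if [ "$a" = "$b" ]; then echo match; else echo MISMATCH; fi
}

# Demo on two identical fixture files (made-up key material)
f1=$(mktemp); f2=$(mktemp)
printf 'ssh-rsa AAAA... ram1@host\n' > "$f1"
cp "$f1" "$f2"
compare_keys "$f1" "$f2"   # prints: match
rm -f "$f1" "$f2"
```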

 

Do I have to create the users on the new EC2 instance?

 

 

Yes, you need to create the users (ram1, ra2, ke3, rt4, yu6) on the new EC2 instance because user accounts are not automatically transferred between systems. Each Linux user must exist on the new system to ensure their home directories and permissions are correctly assigned. Here's how you can do it:


Steps to Create Users in the New EC2

  1. Create the Users with the Same UID and GID (Recommended)
    • Check the user IDs (UID) and group IDs (GID) on the source instance using:

bash

 

grep "^ram1:" /etc/passwd

Example output:


 

ram1:x:1001:1001:Ram User:/home/ram1:/bin/bash

    • Use the same UID and GID when creating the users on the new EC2 instance:

bash

 

sudo useradd -u 1001 -g 1001 -m -s /bin/bash ram1

sudo useradd -u 1002 -g 1002 -m -s /bin/bash ra2

sudo useradd -u 1003 -g 1003 -m -s /bin/bash ke3

sudo useradd -u 1004 -g 1004 -m -s /bin/bash rt4

sudo useradd -u 1005 -g 1005 -m -s /bin/bash yu6

    • If you don't need to preserve UID and GID, you can simply run:

bash

 

sudo useradd -m -s /bin/bash ram1

sudo useradd -m -s /bin/bash ra2

sudo useradd -m -s /bin/bash ke3

sudo useradd -m -s /bin/bash rt4

sudo useradd -m -s /bin/bash yu6

  2. Set Home Directory Permissions: Ensure each user owns their home directory:

bash

 

sudo chown -R ram1:ram1 /home/ram1

sudo chown -R ra2:ra2 /home/ra2

sudo chown -R ke3:ke3 /home/ke3

sudo chown -R rt4:rt4 /home/rt4

sudo chown -R yu6:yu6 /home/yu6
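Typing the five useradd commands by hand invites typos. Since users_info.txt already holds the /etc/passwd entries, the commands can be generated from it. A minimal sketch, assuming the standard seven-field passwd format:

```shell
#!/bin/bash
# Turn /etc/passwd-format lines into useradd commands that preserve
# UID, GID, home directory, and shell.
passwd_to_useradd() {
  awk -F: '{ printf "useradd -u %s -g %s -d %s -m -s %s %s\n", $3, $4, $6, $7, $1 }' "$1"
}

# Demo on a fixture line (hypothetical entry)
fixture=$(mktemp)
echo 'ram1:x:1001:1001:Ram User:/home/ram1:/bin/bash' > "$fixture"
passwd_to_useradd "$fixture"
# -> useradd -u 1001 -g 1001 -d /home/ram1 -m -s /bin/bash ram1
rm -f "$fixture"
```

Note that `useradd -g` expects the group to exist already, so create it first (e.g. `groupadd -g 1001 ram1`), or drop the -u/-g flags if the IDs need not match the source instance.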


Copy SSH Keys to the New EC2

After creating the users, transfer their SSH keys as follows:

  1. Transfer the .ssh/authorized_keys Files: On the source instance, archive the .ssh directories for these users:

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    sudo tar czf ${user}_ssh.tar.gz -C /home/${user}/ .ssh

done

Use scp to copy these archives to the new EC2 instance:

bash

 

scp *_ssh.tar.gz ec2-user@<new-ec2-ip>:/tmp/

  2. Extract and Set Permissions on the New EC2: On the new EC2 instance, restore the .ssh directories:

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    sudo mkdir -p /home/${user}/.ssh

    sudo tar xzf /tmp/${user}_ssh.tar.gz -C /home/${user}/

    sudo chown -R ${user}:${user} /home/${user}/.ssh

    sudo chmod 700 /home/${user}/.ssh

    sudo chmod 600 /home/${user}/.ssh/authorized_keys

done


Test and Verify

  • Verify users can log in using their SSH keys:

bash

 

ssh -i <private-key> ram1@<new-ec2-ip>

  • Ensure .ssh directory and file permissions are correct:

bash

 

ls -ld /home/*/.ssh

ls -l /home/*/.ssh/authorized_keys


Optional: Automate User Creation and Key Setup

You can write a script to automate these steps if you need to repeat this process for multiple EC2 instances. Let me know if you'd like help creating a script!

 

 

 

automation script for Steps to Create Users in the New EC2

 

 

Here’s an automation script to create users in a new EC2 instance, set up their home directories, and copy their SSH keys. Save this script as setup_users.sh and execute it on the new EC2 instance.


Steps to Use the Script

  1. Prepare SSH Key Archives: Ensure SSH key archives (<username>_ssh.tar.gz) for each user are transferred to /tmp on the new EC2 instance.
  2. Run the Script:
    • Save the script as setup_users.sh on the new EC2 instance.
    • Make it executable:

bash

 

chmod +x setup_users.sh

    • Execute the script:

bash

 

sudo ./setup_users.sh

  3. Verify Setup:
    • Ensure the users are created by listing them:

bash

 

grep -E 'ram1|ra2|ke3|rt4|yu6' /etc/passwd

    • Check the .ssh/authorized_keys files for each user.

This script automates the process of creating users, preserving UIDs/GIDs, and setting up RSA keys. Let me know if you need further modifications!

 

 

If you don't need them, remove the UIDs and GIDs

 

 

 

I've removed the UID and GID references from your script. Let me know if you need any further modifications!

 

I am a little confused. Give me a step-by-step guide to migrate users with RSA keys to a new EC2 instance, using as much automation as possible.

 

 

Here is a step-by-step guide for automating the migration of users with their RSA SSH keys to a new EC2 instance, utilizing the provided script.


Step 1: Prepare the Source System

  1. Archive SSH Keys for Each User:
    • On the source system, archive the .ssh directory of each user:

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    sudo tar czf /tmp/${user}_ssh.tar.gz -C /home/${user} .ssh

done

    • This creates compressed .tar.gz files in /tmp.
  2. Transfer Archives to the New EC2 Instance:
    • Use scp to copy the .tar.gz files to the /tmp directory on the new EC2 instance:

bash

 

scp /tmp/*_ssh.tar.gz ec2-user@<new-ec2-ip>:/tmp/


Step 2: Execute the Automation Script on the New EC2 Instance

  1. Copy the Script to the New EC2 Instance: Save the provided script (setup_users.sh) on your local machine and copy it to the new EC2 instance:

bash

 

scp setup_users.sh ec2-user@<new-ec2-ip>:/home/ec2-user/

  2. Log in to the New EC2 Instance:

bash

 

ssh -i <private-key> ec2-user@<new-ec2-ip>

  3. Make the Script Executable: Once logged in, set executable permissions for the script:

bash

 

chmod +x /home/ec2-user/setup_users.sh

  4. Run the Script: Execute the script with sudo to create users and restore their SSH keys:

bash

 

sudo /home/ec2-user/setup_users.sh


Step 3: Verify the Setup

  1. Check Users: Ensure the users are created:

bash

 

grep -E 'ram1|ra2|ke3|rt4|yu6' /etc/passwd

  2. Verify SSH Key Setup: Confirm each user has their .ssh/authorized_keys file:

bash

 

for user in ram1 ra2 ke3 rt4 yu6; do

    echo "Checking SSH keys for $user"

    sudo ls -l /home/$user/.ssh/authorized_keys

done

  3. Test SSH Access: Test logging in for one of the migrated users:

bash

 

ssh -i <private-key> ram1@<new-ec2-ip>


How the Script Works

  1. User Creation:
    • Checks if the user exists using id -u. If not, creates the user with useradd.
  2. Restore SSH Keys:
    • Looks for the corresponding .tar.gz archive in /tmp.
    • Extracts the archive into the user’s .ssh directory.
    • Sets proper ownership and permissions:
      • .ssh: 700
      • authorized_keys: 600
  3. Validation:
    • Confirms the user and SSH key setup at the end.

Step 4: Cleanup

  1. Remove the archives from /tmp on the new EC2 instance:

bash

 

sudo rm -f /tmp/*_ssh.tar.gz


This step-by-step guide ensures that your users and their SSH keys are successfully migrated to the new EC2 instance. Let me know if you need further clarification!

 

 

setup_users.sh

#!/bin/bash

 

# List of users to create

USERS=("ram1" "ra2" "ke3" "rt4" "yu6")

 

# Path to SSH key archives (assumes archives are already transferred to /tmp)

SSH_KEY_ARCHIVE_PATH="/tmp"

 

# Check if running as root

if [ "$EUID" -ne 0 ]; then

  echo "Please run as root or use sudo."

  exit 1

fi

 

# Create users and set up SSH keys

for USERNAME in "${USERS[@]}"; do

  echo "Creating user: $USERNAME"

 

  # Create user if it doesn't already exist

  if ! id -u "$USERNAME" > /dev/null 2>&1; then

    useradd -m -s /bin/bash "$USERNAME"

  else

    echo "User $USERNAME already exists. Skipping creation."

  fi

 

  # Restore SSH keys

  SSH_ARCHIVE="$SSH_KEY_ARCHIVE_PATH/${USERNAME}_ssh.tar.gz"

  if [ -f "$SSH_ARCHIVE" ]; then

    echo "Restoring SSH keys for $USERNAME from $SSH_ARCHIVE"

    mkdir -p "/home/$USERNAME/.ssh"

    tar xzf "$SSH_ARCHIVE" -C "/home/$USERNAME"   # archive contains .ssh/, created with: tar czf ... -C /home/${user} .ssh

    chown -R "$USERNAME:$USERNAME" "/home/$USERNAME/.ssh"

    chmod 700 "/home/$USERNAME/.ssh"

    chmod 600 "/home/$USERNAME/.ssh/authorized_keys"

  else

    echo "SSH archive for $USERNAME not found at $SSH_ARCHIVE. Skipping key restoration."

  fi

 

done

 

# Verify setup

echo "User creation and SSH key setup complete. Verifying setup..."

for USERNAME in "${USERS[@]}"; do

  if id -u "$USERNAME" > /dev/null 2>&1; then

    echo "User $USERNAME exists. Home directory: $(getent passwd "$USERNAME" | cut -d: -f6)"

    if [ -f "/home/$USERNAME/.ssh/authorized_keys" ]; then

      echo "SSH keys for $USERNAME are set up."

    else

      echo "No SSH keys found for $USERNAME."

    fi

  else

    echo "User $USERNAME does not exist."

  fi

 

done
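Before running setup_users.sh on a fresh instance, it is worth letting bash parse it without executing anything; bash -n exits non-zero on a syntax error. Demonstrated here on a throwaway script rather than the real file:

```shell
#!/bin/bash
# Syntax-check a script without running it. Against the real file this
# would simply be: bash -n setup_users.sh
demo=$(mktemp)
printf '#!/bin/bash\necho ok\n' > "$demo"
if bash -n "$demo"; then
  echo "syntax OK"
fi
rm -f "$demo"
```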

Thursday, 1 August 2024

Java Kafka consumer with Protobuf and Gradle

 

redpanda-protobuf-consumer/
├── build.gradle
├── gradle/
│   └── wrapper/
│       ├── gradle-wrapper.jar
│       └── gradle-wrapper.properties
├── gradlew
├── gradlew.bat
├── settings.gradle
└── src/
    ├── main/
    │   ├── java/
    │   │   └── com/
    │   │       └── example/
    │   │           └── redpanda/
    │   │               └── NYSE_BQTMessage_Consumer.java
    │   ├── proto/
    │   │   └── BQT_Cloud_Streaming.proto
    │   └── resources/
    │       └── application.properties
    └── test/
        └── java/

 

Protobuf File

Download the BQT_Cloud_Streaming.proto file and place it in the src/main/proto/ directory.

build.gradle

plugins {

    id 'java'

    id 'application'

    id 'com.google.protobuf' version '0.9.4'

}

 

group = 'com.example.redpanda'

version = '1.0.0'

 

application {

    mainClass = 'com.example.redpanda.NYSE_BQTMessage_Consumer'

}

 

repositories {

    mavenCentral()

}

 

dependencies {

    implementation 'org.apache.kafka:kafka-clients:3.5.1'

    implementation 'com.google.protobuf:protobuf-java:3.24.3'

}

 

protobuf {

    protoc {

        artifact = "com.google.protobuf:protoc:3.24.3"

    }

    generateProtoTasks {

        all().each { task ->

            task.builtins {

                java { }

            }

        }

    }

}

 

java {

    sourceCompatibility = JavaVersion.VERSION_17

    targetCompatibility = JavaVersion.VERSION_17

}

 

 

settings.gradle

rootProject.name = 'redpanda-protobuf-consumer'

 

NYSE_BQTMessage_Consumer.java

package com.example.redpanda;


import org.apache.kafka.clients.consumer.ConsumerConfig;

import org.apache.kafka.clients.consumer.ConsumerRecords;

import org.apache.kafka.clients.consumer.KafkaConsumer;

import org.apache.kafka.clients.consumer.ConsumerRecord;


import java.time.Duration;

import java.util.Collections;

import java.util.Properties;


// Import the generated Protobuf class from the 'pulse' package

import pulse.BQTMessageOuterClass.BQTMessage;


public class NYSE_BQTMessage_Consumer {


    public static void main(String[] args) {

        // Kafka consumer configuration settings

        Properties properties = new Properties();

        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // Change to your Kafka server

        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "your-group-id"); // Set your group ID

        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");

        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // Change as needed

       

        // SASL configuration

        properties.put("sasl.mechanism", "PLAIN"); // or another mechanism if required

        properties.put("security.protocol", "SASL_PLAINTEXT"); // Change to SASL_SSL if needed

        properties.put("sasl.jaas.config", "org.apache.kafka.common.security.plain.PlainLoginModule required " +

                "username='your-username' " + // Set your username

                "password='your-password';"); // Set your password


        // Create Kafka consumer

        KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(properties);


        // Subscribe to the topic

        consumer.subscribe(Collections.singletonList("bqt-topic"));


        System.out.println("Listening for messages on topic: bqt-topic");


        try {

            while (true) {

                // Poll for records

                ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofMillis(100));

                for (ConsumerRecord<String, byte[]> record : records) {

                    try {

                        // Parse the Protobuf message

                        BQTMessage message = BQTMessage.parseFrom(record.value());

                        System.out.println("Received message:");

                        System.out.println("Symbol: " + message.getSymbol());

                        System.out.println("Price: " + message.getPrice());

                        System.out.println("Timestamp: " + message.getTimestamp());

                    } catch (Exception e) {

                        System.err.println("Failed to parse message: " + e.getMessage());

                    }

                }

            }

        } finally {

            consumer.close();

        }

    }

}


Build the Project

./gradlew build

 

Run the Consumer

./gradlew run

 

Notes

  1. Protobuf Classes: When you run the Gradle build, the Protobuf plugin generates Java classes in build/generated/source/proto/main/java/. Use these classes (like BQTMessage above) in your code.
  2. Kafka Setup: Ensure Redpanda or Kafka is running and that the topic (bqt-topic) is available with Protobuf-encoded messages.
  3. Dependencies: Update dependency versions if needed.
  4. Testing: Use a tool like kafkacat or a custom producer to send test messages to your topic.

 

 

Where should I copy the files created by the protoc compiler?

 

When using the Gradle Protobuf plugin, you don’t need to manually copy files generated by the Protobuf compiler. Gradle will handle the compilation and place the generated files in a designated directory.

Where Protobuf Files Are Generated

After running ./gradlew build, the Protobuf plugin generates Java files from the .proto definitions and places them in:

 

build/generated/source/proto/main/java/

 

These files are automatically included in the project's classpath by Gradle.

 

Using Gradle's Automatic Protobuf Integration

If you use the Gradle Protobuf plugin, ensure your .proto files are in the following directory:

src/main/proto/

 

When you build the project, Gradle automatically:

  1. Compiles the .proto files.
  2. Places the generated .java files in build/generated/source/proto/main/java/.
  3. Includes them in your project's classpath.


.bat file

@echo off
REM Set root directory
set ROOT=C:\redpanda-protobuf-consumer

REM Create folder structure
mkdir "%ROOT%\src\main\java\com\example\redpanda"
mkdir "%ROOT%\src\main\proto"

REM Create build.gradle file
echo plugins {> "%ROOT%\build.gradle"
echo     id 'java'>> "%ROOT%\build.gradle"
echo     id 'com.google.protobuf' version '0.9.4'>> "%ROOT%\build.gradle"
echo }>> "%ROOT%\build.gradle"
echo.>> "%ROOT%\build.gradle"
echo group = 'com.example.redpanda'>> "%ROOT%\build.gradle"
echo version = '1.0.0'>> "%ROOT%\build.gradle"
echo.>> "%ROOT%\build.gradle"
echo repositories {>> "%ROOT%\build.gradle"
echo     mavenCentral()>> "%ROOT%\build.gradle"
echo }>> "%ROOT%\build.gradle"
echo.>> "%ROOT%\build.gradle"
echo dependencies {>> "%ROOT%\build.gradle"
echo     implementation 'org.apache.kafka:kafka-clients:3.5.1'>> "%ROOT%\build.gradle"
echo     implementation 'com.google.protobuf:protobuf-java:3.24.3'>> "%ROOT%\build.gradle"
echo }>> "%ROOT%\build.gradle"
echo.>> "%ROOT%\build.gradle"
echo protobuf {>> "%ROOT%\build.gradle"
echo     protoc {>> "%ROOT%\build.gradle"
echo         artifact = "com.google.protobuf:protoc:3.24.3">> "%ROOT%\build.gradle"
echo     }>> "%ROOT%\build.gradle"
echo     generateProtoTasks {>> "%ROOT%\build.gradle"
echo         all().each { task -^>>> "%ROOT%\build.gradle"
echo             task.builtins {>> "%ROOT%\build.gradle"
echo                 java { }>> "%ROOT%\build.gradle"
echo             }>> "%ROOT%\build.gradle"
echo         }>> "%ROOT%\build.gradle"
echo     }>> "%ROOT%\build.gradle"
echo }>> "%ROOT%\build.gradle"
echo.>> "%ROOT%\build.gradle"
echo java {>> "%ROOT%\build.gradle"
echo     sourceCompatibility = JavaVersion.VERSION_17>> "%ROOT%\build.gradle"
echo     targetCompatibility = JavaVersion.VERSION_17>> "%ROOT%\build.gradle"
echo }>> "%ROOT%\build.gradle"

REM Create settings.gradle file
echo rootProject.name = 'redpanda-protobuf-consumer'> "%ROOT%\settings.gradle"

REM Create NYSE_BQTMessage_Consumer.java file
echo package com.example.redpanda;> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo.>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo import org.apache.kafka.clients.consumer.ConsumerConfig;>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo import org.apache.kafka.clients.consumer.ConsumerRecords;>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo import org.apache.kafka.clients.consumer.KafkaConsumer;>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo import org.apache.kafka.clients.consumer.ConsumerRecord;>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo.>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo import java.time.Duration;>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo import java.util.Collections;>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo import java.util.Properties;>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo.>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo import nyse.bqt.BQTMessageOuterClass.BQTMessage;>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo.>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo public class NYSE_BQTMessage_Consumer {>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo     public static void main(String[] args) {>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo         Properties properties = new Properties();>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo         properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo         properties.put(ConsumerConfig.GROUP_ID_CONFIG, "bqt-consumer-group");>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo         properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo         properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArrayDeserializer");>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo         properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo.>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo         KafkaConsumer^<String, byte[]^> consumer = new KafkaConsumer^<^>(properties);>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo         consumer.subscribe(Collections.singletonList("bqt-topic"));>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo         System.out.println("Listening for messages on topic: bqt-topic");>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo.>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo         try {>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo             while (true) {>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo                 ConsumerRecords^<String, byte[]^> records = consumer.poll(Duration.ofMillis(100));>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo                 for (ConsumerRecord^<String, byte[]^> record : records) {>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo                     try {>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo                         BQTMessage message = BQTMessage.parseFrom(record.value());>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo                         System.out.println("Received: " + message.getSymbol());>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo                     } catch (Exception e) {>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo                         System.err.println("Error: " + e.getMessage());>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo                     }>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo                 }>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo             }>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo         } finally {>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo             consumer.close();>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo         }>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo     }>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"
echo }>> "%ROOT%\src\main\java\com\example\redpanda\NYSE_BQTMessage_Consumer.java"

REM Create placeholder Protobuf file
echo // Placeholder for BQT_Cloud_Streaming.proto> "%ROOT%\src\main\proto\BQT_Cloud_Streaming.proto"

echo Done! The project structure has been created.
pause

README.md

# NYSE BQT Message Consumer

 

This project is a Java-based Kafka consumer that reads messages from a Kafka topic. The messages are serialized using Protocol Buffers (Protobuf) format. The consumer connects to a Kafka cluster using SASL authentication.

 

## Project Structure

 

 

## Required Versions

 

- **Java**: Version 17

- **Gradle**: Version 8.0 or higher

- **Kafka**: Version 3.5.1

- **Protobuf**: Version 3.24.3

 

## Gradle Configuration

 

### build.gradle

 

This file contains the configuration needed to build the project:

 

```groovy

plugins {

    id 'java'

    id 'com.google.protobuf' version '0.9.4'

    id 'application'

}

 

application {

    mainClass = 'com.example.redpanda.NYSE_BQTMessage_Consumer'

}

 

repositories {

    mavenCentral()

}

 

dependencies {

    implementation 'org.apache.kafka:kafka-clients:3.5.1'

    implementation 'com.google.protobuf:protobuf-java:3.24.3'

}

 

protobuf {

    protoc {

        artifact = "com.google.protobuf:protoc:3.24.3"

    }

    generateProtoTasks {

        all().each { task ->

            task.builtins {

                java { }

            }

        }

    }

}

 

java {

    sourceCompatibility = JavaVersion.VERSION_17

    targetCompatibility = JavaVersion.VERSION_17

}

 

 

```

## Running the Consumer

  1. Build the Project: Use the following command to build the project:

bash


./gradlew clean build   # For Linux/Mac

gradlew clean build      # For Windows

  2. Run the Consumer: Start the consumer application with:

bash


./gradlew run   # For Linux/Mac

gradlew run      # For Windows