Saturday, 9 December 2023

Python code like tcpdump

 

from scapy.all import sniff, TCP, IP, raw

import datetime

import logging

 

# Configure logging

logging.basicConfig(

    filename="packet_capture.log",  # File to save the output

    level=logging.INFO,            # Log level

    format="%(asctime)s - %(message)s",  # Format for log entries

    datefmt="%Y-%m-%d %H:%M:%S"    # Timestamp format

)

 

# Define the Redpanda port (Kafka typically uses 9092; adjust for your setup)

REDPANDA_PORT = 9092

 

def log_and_print(message):

    """Logs the message to a file and prints it to the console."""

    print(message)

    logging.info(message)

 

def packet_callback(packet):

    # Extract arrival time

    arrival_time = datetime.datetime.now()

 

    # Check if the packet has IP and TCP layers

    if IP in packet and TCP in packet:

        # Extract general packet details

        ip_src = packet[IP].src

        ip_dst = packet[IP].dst

        tcp_sport = packet[TCP].sport

        tcp_dport = packet[TCP].dport

        iface = packet.sniffed_on if hasattr(packet, 'sniffed_on') else "Unknown Interface"

        ttl = packet[IP].ttl

        total_length = packet[IP].len

 

        # Extract TCP-specific details

        seq = packet[TCP].seq

        ack = packet[TCP].ack

        window = packet[TCP].window

        flags = packet.sprintf("%TCP.flags%")  # TCP flags as a string

        checksum = packet[TCP].chksum

 

        # Log detailed packet info

        log_and_print(f"Interface: {iface} | IP Packet: {ip_src}:{tcp_sport} -> {ip_dst}:{tcp_dport} | Protocol: TCP")

        log_and_print(f"  Packet Length: {total_length} bytes | TTL: {ttl} | Checksum: {hex(checksum)}")

        log_and_print(f"  Sequence Number: {seq} | Acknowledgment Number: {ack} | Window Size: {window}")

        log_and_print(f"  Flags: {flags}")

 

        # Extract and log raw packet data

        raw_data = raw(packet)

        log_and_print(f"Raw Packet Data: {raw_data.hex()}")

 

        # Check if traffic is for Redpanda

        if tcp_dport == REDPANDA_PORT or tcp_sport == REDPANDA_PORT:

            try:

                # Decode payload as UTF-8

                payload = bytes(packet[TCP].payload).decode("utf-8")  # decode only the TCP payload, not the whole frame

 

                # Assuming message has a timestamp in nanoseconds as the first field

                source_time_ns = int(payload.split(",")[0])  # Adjust based on message format

                source_time = datetime.datetime.fromtimestamp(source_time_ns / 1e9)

 

                # Calculate latency

                latency = (arrival_time - source_time).total_seconds() * 1000  # Convert to milliseconds

 

                # Log latency details

                log_and_print(f"  Redpanda Message: {payload}")

                log_and_print(f"  Source Time: {source_time}, Latency: {latency:.2f} ms")

            except Exception as e:

                log_and_print(f"  Error decoding Redpanda message: {e}")

        log_and_print("-" * 50)

 

# Start sniffing packets

log_and_print("Starting packet capture... Press Ctrl+C to stop.")

sniff(filter="tcp", prn=packet_callback, store=False, iface="any")

 

 

Explanation of Changes

  1. Logging Setup:
    • The logging.basicConfig function is configured to:
      • Save log messages to a file (packet_capture.log).
      • Include timestamps and message formatting.
  2. log_and_print Function:
    • A helper function that:
      • Prints the message to the console.
      • Logs the same message to the file using logging.info.
  3. Output to File:
    • All relevant output (packet details, raw data, and decoded messages) is saved in packet_capture.log.
  4. File Format:
    • The log file will contain each entry with a timestamp and message, making it easy to analyze later.

 

 

2024-11-25 14:15:18 - Starting packet capture... Press Ctrl+C to stop.

2024-11-25 14:15:20 - Interface: wlan0 | IP Packet: 192.168.1.10:52345 -> 192.168.1.20:9092 | Protocol: TCP

2024-11-25 14:15:20 -   Packet Length: 60 bytes | TTL: 64 | Checksum: 0x8c7b

2024-11-25 14:15:20 -   Sequence Number: 123456789 | Acknowledgment Number: 987654321 | Window Size: 64240

2024-11-25 14:15:20 -   Flags: PA

2024-11-25 14:15:20 - Raw Packet Data: 4500003c1a2b400040067dabc0a8010ac0a80114

2024-11-25 14:15:20 -   Redpanda Message: 1698322918123456789,Hello, World!

2024-11-25 14:15:20 -   Source Time: 2024-11-25 14:15:18.123456, Latency: 22.22 ms

2024-11-25 14:15:20 - --------------------------------------------------
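
Since every latency entry in packet_capture.log follows the "Latency: <value> ms" pattern shown above, the log can be summarized after a capture run. The following is a minimal sketch that assumes exactly that log format and file name:

import re

# Collect all latency values written by the capture script above
latencies = []
with open("packet_capture.log") as log_file:
    for line in log_file:
        match = re.search(r"Latency: ([\d.]+) ms", line)
        if match:
            latencies.append(float(match.group(1)))

if latencies:
    print(f"Messages: {len(latencies)}")
    print(f"Min: {min(latencies):.2f} ms | Avg: {sum(latencies) / len(latencies):.2f} ms | Max: {max(latencies):.2f} ms")
else:
    print("No latency entries found in packet_capture.log")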

Java code for Kafka

 

Step 1: Install Protocol Buffers (protoc)

  1. Download Protocol Buffers: Get the protoc release archive for your platform from the official Protocol Buffers releases page on GitHub.
  2. Install Protocol Buffers:
    • Extract the downloaded file and move the protoc binary to a directory included in your PATH.

Example for Linux/Mac:

bash

sudo mv protoc /usr/local/bin/

Example for Windows:

    • Add the protoc binary path to your system environment variables.
  3. Verify Installation: Run the following command to confirm installation:

bash

 

protoc --version


Step 2: Generate Java Classes from Proto File

  1. Locate your .proto file (e.g., PPP_Cloud_Streaming.proto).
  2. Run protoc to generate Java classes:

bash

 

protoc --java_out=PATH_TO_OUTPUT_DIRECTORY PATH_TO_PROTO_FILE

Example:

bash

 

protoc --java_out=./src/main/java PPP_Cloud_Streaming.proto

  3. Include the generated Java file in your Java project.

Step 3: Set Up Java Project with Kafka Dependencies

  1. Create a Maven or Gradle project.
  2. Add the required dependencies to pom.xml (for Maven) or build.gradle (for Gradle).

Maven:

Add the following dependencies for Kafka and Protocol Buffers:

xml

 

<dependencies>

    <!-- Kafka Client -->

    <dependency>

        <groupId>org.apache.kafka</groupId>

        <artifactId>kafka-clients</artifactId>

        <version>3.5.1</version> <!-- Use the appropriate Kafka version -->

    </dependency>

    <!-- Protocol Buffers -->

    <dependency>

        <groupId>com.google.protobuf</groupId>

        <artifactId>protobuf-java</artifactId>

        <version>3.24.0</version> <!-- Use the version matching your protoc -->

    </dependency>

</dependencies>

Gradle:

Add the dependencies to build.gradle:

gradle

 

dependencies {

    // Kafka Client

    implementation 'org.apache.kafka:kafka-clients:3.5.1'

    // Protocol Buffers

    implementation 'com.google.protobuf:protobuf-java:3.24.0'

}


Step 4: Java Code Implementation

Here’s a Java equivalent of your Python Kafka consumer:

java

 

import org.apache.kafka.clients.consumer.ConsumerRecord;

import org.apache.kafka.clients.consumer.ConsumerRecords;

import org.apache.kafka.clients.consumer.KafkaConsumer;

 

import java.time.Duration;

import java.util.Collections;

import java.util.Properties;

 

// Import generated Protobuf classes

import your.package.PPPCloudStreaming;

 

public class KafkaProtobufConsumer {

    private final KafkaConsumer<String, byte[]> consumer;

    private final String topicName;

 

    public KafkaProtobufConsumer(String topicName) {

        this.topicName = topicName;

 

        // Kafka consumer configuration

        Properties props = new Properties();

        props.setProperty("bootstrap.servers", "broker.lt.use1. :9094"); // Adjust as needed

        props.setProperty("group.id", "my-group");

        props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        props.setProperty("security.protocol", "SASL_PLAINTEXT");

        props.setProperty("sasl.jaas.config", "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"SCRAM\" password=\"password\";");

 

        this.consumer = new KafkaConsumer<>(props);

    }

 

    public void start() {

        consumer.subscribe(Collections.singletonList(topicName));

        try {

            while (true) {

                ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofMillis(100));

                for (ConsumerRecord<String, byte[]> record : records) {

                    // Deserialize Protobuf message

                    PPPCloudStreaming.PPPMessage message = PPPCloudStreaming.PPPMessage.parseFrom(record.value());

 

                    // Process the message

                    System.out.println("Received message: " + message);

                }

            }

        } catch (Exception e) {

            e.printStackTrace();

        } finally {

            consumer.close();

        }

    }

 

    public static void main(String[] args) {

        KafkaProtobufConsumer consumer = new KafkaProtobufConsumer("PPP_trd_str_1");

        consumer.start();

    }

}


Key Points:

  1. Replace your.package in the Java code with the package where the generated Protobuf classes are located.
  2. Make sure the Protobuf .proto file is up to date with the message structure.
  3. Test the application by consuming messages from your Kafka topic.
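
The original Python consumer is not shown in these notes, but for comparison here is a minimal sketch of what a Python counterpart could look like using confluent_kafka. The broker address, the SASL settings, the topic, and the generated module/class names (ppp_cloud_streaming_pb2, PPPMessage) are assumptions and must be adapted to your actual setup:

from confluent_kafka import Consumer

# Assumed name of the module generated by `protoc --python_out=.`; adjust to your schema
from ppp_cloud_streaming_pb2 import PPPMessage

conf = {
    "bootstrap.servers": "broker:9094",   # placeholder: use your broker address
    "group.id": "my-group",
    "auto.offset.reset": "earliest",
    # Add security.protocol / sasl.* settings here to mirror the Java example if needed
}

consumer = Consumer(conf)
consumer.subscribe(["PPP_trd_str_1"])

try:
    while True:
        msg = consumer.poll(0.1)
        if msg is None or msg.error():
            continue
        # Values arrive as raw bytes; parse them with the generated Protobuf class
        message = PPPMessage()
        message.ParseFromString(msg.value())
        print(f"Received message: {message}")
finally:
    consumer.close()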

 

Friday, 8 December 2023

Print how many network interfaces the instance has

import psutil


# Get the network interfaces on the instance

interfaces = psutil.net_if_addrs()


# Print the count and names of all network interfaces

print(f"Network interfaces ({len(interfaces)} found):")

for interface in interfaces:

    print(interface)
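
If you also want to see which IPv4 address each interface carries, the same psutil data can be expanded slightly; a small sketch using only what net_if_addrs() already returns:

import socket

import psutil

# Print each interface together with its IPv4 addresses (if any)
for name, addrs in psutil.net_if_addrs().items():
    ipv4 = [a.address for a in addrs if a.family == socket.AF_INET]
    print(f"{name}: {', '.join(ipv4) if ipv4 else 'no IPv4 address'}")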


Thursday, 7 December 2023

Python tcpdump-like code

from scapy.all import sniff, TCP, IP, raw

import datetime

import logging

 

# Configuration

SERVER_A_IP = "192.168.1.10"  # Replace with Server A's IP

SERVER_A_PORT = 9092          # Port on Server A used to send data

SERVER_B_IP = "192.168.1.20"  # Replace with Server B's IP

 

# Logging setup

logging.basicConfig(

    filename="server_b_pull.log",  # Log file

    level=logging.INFO,

    format="%(asctime)s - %(message)s",

    datefmt="%Y-%m-%d %H:%M:%S"

)

 

def log_and_print(message):

    """Logs the message to a file and prints it to the console."""

    print(message)

    logging.info(message)

 

def packet_callback(packet):

    """Callback function to process captured packets."""

    arrival_time = datetime.datetime.now()

 

    # Ensure the packet has IP and TCP layers

    if IP in packet and TCP in packet:

        ip_src = packet[IP].src

        ip_dst = packet[IP].dst

        tcp_sport = packet[TCP].sport

        tcp_dport = packet[TCP].dport

 

        # Filter packets coming from Server A on port 9092 to Server B

        if ip_src == SERVER_A_IP and tcp_sport == SERVER_A_PORT and ip_dst == SERVER_B_IP:

            # Extract raw data

            raw_data = raw(packet)

 

            # Log general packet information

            log_and_print(f"Packet from {ip_src}:{tcp_sport} -> {ip_dst}:{tcp_dport}")

            log_and_print(f"  Raw Packet Data: {raw_data.hex()}")

 

            # Attempt to decode payload and extract timestamp (if applicable)

            try:

                payload = bytes(packet[TCP].payload).decode("utf-8")  # Decode only the TCP payload (assumed UTF-8), not the whole frame

                source_time_ns = int(payload.split(",")[0])  # Adjust based on payload format

                source_time = datetime.datetime.fromtimestamp(source_time_ns / 1e9)

 

                # Calculate latency

                latency = (arrival_time - source_time).total_seconds() * 1000  # Convert to milliseconds

                log_and_print(f"  Source Timestamp: {source_time}, Arrival Time: {arrival_time}, Latency: {latency:.2f} ms")

            except Exception as e:

                log_and_print(f"  Error decoding payload or calculating latency: {e}")

 

            log_and_print("-" * 50)

 

# Define the packet filter

packet_filter = f"tcp and src host {SERVER_A_IP} and src port {SERVER_A_PORT} and dst host {SERVER_B_IP}"

 

# Start sniffing packets

log_and_print(f"Starting packet capture for traffic from {SERVER_A_IP}:{SERVER_A_PORT} to Server B ({SERVER_B_IP})...")

sniff(filter=packet_filter, prn=packet_callback, store=False, iface="any")




https://pypi.org/project/psutil/#files
https://pypi.org/project/scapy/#files

Logs to SNS

 

Here's a comprehensive guide to setting up a monitoring system for your Redpanda server using Amazon CloudWatch, including creating a metric filter to monitor for service down events, configuring an alarm that triggers based on this metric, and sending notifications to an SNS topic if the alarm fires.

Step 1: Create an SNS Topic

First, you need to create an SNS topic to receive notifications.

  1. Open the Amazon SNS Console:
  2. Create a Topic:
    • Click on Topics in the left navigation pane.
    • Click the Create topic button.
    • Select Standard or FIFO (Standard is usually sufficient).
    • Fill in the required details:
      • Name: Enter a name for your topic (e.g., RedpandaAlerts).
    • Click Create topic.
  3. Subscribe to the Topic:
    • Click on your newly created topic to view its details.
    • Click Create subscription.
    • Select a protocol (e.g., Email, SMS, Lambda, etc.) and enter the necessary endpoint (e.g., email address).
    • Click Create subscription.
    • If you chose Email, check your inbox and confirm the subscription.

Step 2: Configure CloudWatch Agent to Send Logs

Ensure that the CloudWatch Agent is configured to send logs from /var/log/messages to CloudWatch Logs.

  1. Modify the CloudWatch Agent configuration file (e.g., /opt/aws/amazon-cloudwatch-agent/bin/config.json) to include the following:

json

 

{

  "logs": {

    "logs_collected": {

      "files": {

        "collect_list": [

          {

            "file_path": "/var/log/messages",

            "log_group_name": "RedpandaLogs",

            "log_stream_name": "{instance_id}",

            "retention_in_days": 14

          }

        ]

      }

    }

  }

}

  2. Restart the CloudWatch Agent to apply changes:

bash

 

sudo systemctl restart amazon-cloudwatch-agent

Step 3: Create a Metric Filter

Set up a metric filter to count the occurrences of specific error messages indicating that the Redpanda server is down.

  1. Open the CloudWatch Console:
  2. Create Metric Filter:
    • Select Logs and find your log group (e.g., RedpandaLogs).
    • Click on Create Metric Filter.
    • Define the filter pattern. In CloudWatch Logs filter syntax, a leading ? on each quoted term means "match any of these terms", so you might use:

 

?"Redpanda service failed to start" ?"Unable to connect to Redpanda" ?"timeout error"

    • Click Next.
  3. Assign Metric Details:
    • Give your metric a name (e.g., RedpandaServiceDown) and assign it to a namespace (e.g., Redpanda/Metrics).
    • Set the metric value to 1.
    • Click Create Filter.

Step 4: Create a CloudWatch Alarm

Now, create an alarm based on the metric filter to notify you if the Redpanda service is down.

  1. Open the CloudWatch Console:
    • Navigate to the Alarms section.
  2. Create Alarm:
    • Click on Create Alarm.
    • Select the metric you just created (RedpandaServiceDown).
    • Click Select metric.
  3. Define Alarm Conditions:
    • Set the condition to trigger the alarm when the metric is greater than 0 for a period of 5 minutes.
    • Click Next.
  4. Configure Actions:
    • In the Notification section, select the SNS topic you created earlier (RedpandaAlerts).
    • Optionally, configure actions for the OK state to receive notifications when the service is back online.
  5. Name and Create the Alarm:
    • Name your alarm (e.g., "Redpanda Service Down Alarm").
    • Review your settings and click Create Alarm.

Step 5: Test Your Setup

To ensure everything is working correctly, you can simulate an error:

  1. Simulate an Error:
    • Stop the Redpanda service or create log entries that match your error patterns.
  2. Check Alarm Status:
    • Navigate back to the Alarms section in CloudWatch and see if your alarm has entered the ALARM state.
  3. Verify SNS Notifications:
    • Check if the subscribers receive the notifications sent to the SNS topic.

Summary of Steps

  • Create an SNS Topic: Set up a topic for notifications.
  • Configure CloudWatch Agent: Ensure logs are sent to CloudWatch Logs.
  • Create a Metric Filter: Count occurrences of specific error messages.
  • Create a CloudWatch Alarm: Trigger an alarm when the service is down and notify via SNS.
  • Test the Setup: Simulate a failure and verify notifications.

By following these steps, you'll establish a robust monitoring system for your Redpanda server that promptly alerts your team in case of service downtime.
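
The same setup can also be scripted with boto3 instead of the console. The sketch below uses the names from this guide (RedpandaAlerts, RedpandaLogs, RedpandaServiceDown, Redpanda/Metrics); the region and the email address are placeholders:

import boto3

sns = boto3.client("sns", region_name="us-east-1")
logs = boto3.client("logs", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Step 1: SNS topic plus an email subscription (confirm the email afterwards)
topic_arn = sns.create_topic(Name="RedpandaAlerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops-team@example.com")

# Step 3: metric filter on the log group written by the CloudWatch Agent
logs.put_metric_filter(
    logGroupName="RedpandaLogs",
    filterName="RedpandaServiceDownFilter",
    filterPattern='?"Redpanda service failed to start" ?"Unable to connect to Redpanda" ?"timeout error"',
    metricTransformations=[{
        "metricName": "RedpandaServiceDown",
        "metricNamespace": "Redpanda/Metrics",
        "metricValue": "1",
    }],
)

# Step 4: alarm that fires when the metric exceeds 0 in a 5-minute period and notifies the topic
cloudwatch.put_metric_alarm(
    AlarmName="Redpanda Service Down Alarm",
    MetricName="RedpandaServiceDown",
    Namespace="Redpanda/Metrics",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[topic_arn],
)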

 

Saturday, 2 December 2023

Python installation

 

1. Download Python 3.10 Precompiled Binaries

Python releases come with precompiled binary installers for various platforms. On Windows or another Linux machine with internet access, you can download the Python 3.10 precompiled binaries.

Steps to Download Python 3.10 Binaries:

  1. Go to the Official Python Downloads Page:
  2. Download the Linux Tarball:
    • Python.org provides tarballs (.tar.xz) for Linux; note that these are source releases rather than ready-to-run binaries, so they normally need to be built (or use a separately obtained prebuilt Python build).
    • Download the appropriate tarball:
      • For x86_64 architecture (64-bit), download:
        Python-3.10.0.tar.xz
  3. Extract the Python Tarball: After downloading the tarball, extract it on your local machine (Windows or Linux):

bash

 

tar -xvf Python-3.10.0.tar.xz

  4. Transfer the Extracted Python Files to EC2: Once you’ve extracted the Python files on your local machine, transfer the entire directory to the EC2 instance using SCP, WinSCP, or another file transfer tool.

Using SCP:

bash

 

scp -r -i your-key.pem Python-3.10.0/ user@ec2-instance-ip:/home/user/python310/

Using WinSCP:

    • Open WinSCP, connect to your EC2 instance, and upload the extracted Python directory to /home/user/python310/.

2. Download Precompiled Dependencies

Python has several key dependencies like zlib, openssl, libffi, and others that need to be installed for proper functionality. These libraries can also be downloaded as precompiled binaries and transferred to the EC2 instance.

Steps to Download Precompiled Dependencies:

  1. Visit RPM Repositories: Use trusted RPM repositories like RPMFind or CentOS Vault to download the precompiled binaries for the following libraries:
    • GCC (GNU Compiler Collection)
    • zlib-devel
    • openssl-devel
    • libffi-devel
    • bzip2-devel
    • sqlite-devel
  2. Download RPMs for Your EC2 Version:
    • Make sure to download the correct version of these dependencies compatible with the Linux distribution running on your EC2 instance (e.g., Amazon Linux 2, Red Hat 8.x).

For example, you might search and download:

    • gcc-<version>.rpm
    • zlib-devel-<version>.rpm
    • openssl-devel-<version>.rpm
    • libffi-devel-<version>.rpm
    • bzip2-devel-<version>.rpm
    • sqlite-devel-<version>.rpm
  3. Transfer the Dependencies to EC2: After downloading the RPM files, transfer them to the EC2 instance using SCP or WinSCP.

Using SCP:

bash

 

scp -i your-key.pem /path/to/rpms/*.rpm user@ec2-instance-ip:/home/user/rpms/

Using WinSCP:

    • Connect to your EC2 instance via WinSCP and upload the RPM files to /home/user/rpms/.

3. Install the Precompiled Python and Dependencies on EC2

Install Python:

Once you’ve transferred the Python precompiled files, you can set it up on the EC2 instance.

  1. Extract Python on EC2: If you transferred a .tar.xz file (or the entire extracted folder), use the following commands to extract it on the EC2 instance:

bash

 

cd /home/user

tar -xvf Python-3.10.0.tar.xz

  2. Make the Python Executable: Ensure that the Python binary is executable:

bash

 

chmod +x /home/user/python310/python

  3. Set Up Python Environment: Set up the environment to use the Python binary:

bash

 

echo 'export PATH=/home/user/python310:$PATH' >> ~/.bashrc

source ~/.bashrc

  4. Verify Python Installation: Run the following to verify Python:

bash

 

python3 --version

Install Dependencies:

Install the transferred dependencies using the RPM command.

  1. Install RPM Packages: Navigate to the directory where you uploaded the RPM files and install them:

bash

 

cd /home/user/rpms

sudo rpm -ivh *.rpm

This command will install the necessary libraries such as zlib, openssl, libffi, etc.

  2. Verify Dependencies: After installation, you can verify that the dependencies are properly installed:

bash

 

rpm -qi gcc

rpm -qi zlib-devel

rpm -qi openssl-devel


4. Verify Python and Dependencies:

Once the Python binary and dependencies are installed, verify the setup:

  1. Check Python Version:

bash

 

python3 --version

  2. Check Installed Libraries: Ensure that libraries like openssl and zlib are available by testing with Python:

bash

 

python3 -c "import ssl; print(ssl.OPENSSL_VERSION)"

python3 -c "import zlib; print(zlib.ZLIB_VERSION)"

 

Install Python 3.10 where there is no internet access

To install Python 3.10 on your EC2 instance without internet access, you can use your Windows machine with connectivity to the EC2 instance to download the necessary files and transfer them to the EC2 instance. Here's how you can do it step by step:


1. Check Current Python Version on EC2

On your EC2 instance, check if Python is installed and its version:

bash

 

python3 --version


2. Download Python 3.10 on Your Windows Machine

  1. Visit the official Python website: Python 3.10 Downloads.
  2. Download the source tarball (Python-3.10.x.tgz) and the required dependencies for offline installation.

3. Download Required Dependencies

Since your EC2 instance runs Red Hat 8.6:

  1. Identify the dependencies for Python 3.10 (e.g., gcc, make, zlib-devel, openssl-devel, etc.).
  2. On your Windows machine, download these RPM packages from the Red Hat Package Manager or using a CentOS 8/RHEL 8 system that matches your EC2 instance's configuration.

4. Transfer Files to EC2

  1. Install an SCP client on your Windows machine, such as WinSCP or use an SCP command.
  2. Transfer the downloaded files to your EC2 instance:

bash

 

scp -i your-key.pem Python-3.10.x.tgz user@ec2-instance-ip:/path/to/destination


5. Install Python 3.10

On your EC2 instance:

  1. Extract the tarball:

bash

 

tar -xvzf Python-3.10.x.tgz

cd Python-3.10.x

  2. Install dependencies from RPM files:

bash

 

sudo rpm -ivh /path/to/dependencies/*.rpm

  3. Build and install Python:

bash

 

./configure --enable-optimizations

make

sudo make install

  4. Verify the installation:

bash

 

python3.10 --version


6. Set Python 3.10 as Default (Optional)

If you want to use Python 3.10 as the default python3:

bash

 

sudo alternatives --install /usr/bin/python3 python3 /usr/local/bin/python3.10 1

sudo alternatives --config python3


7. Troubleshooting

If you encounter any missing dependency errors, ensure all required RPMs are downloaded and transferred to the EC2 instance.

Let me know if you need additional help!

 

 

 

Let me clarify Point 3: Download Required Dependencies for you.

Why Dependencies Are Needed?

To compile and install Python 3.10 from source, your EC2 instance requires specific libraries and tools, such as:

  • A C compiler (like gcc).
  • Libraries for compression, SSL, and other Python features (zlib-devel, openssl-devel, etc.).

Since your EC2 instance does not have internet access, these dependencies need to be downloaded on your Windows machine and transferred to the EC2 instance.


Steps to Download Required Dependencies

Step 1: Identify Required Packages

Python 3.10 typically requires:

  • gcc (for compiling the source code)
  • make
  • zlib-devel
  • openssl-devel
  • libffi-devel
  • bzip2-devel
  • sqlite-devel
  • Other optional libraries for Python features

Step 2: Use a System with Internet Access

Since you're on a Windows machine, you can either:

  1. Use a Linux VM with internet connectivity (e.g., Ubuntu or CentOS running in VirtualBox or WSL).
  2. Access a package repository to download these dependencies as .rpm files.

Step 3: Download RPM Files

  1. Use RPMFind.net to locate and download the appropriate RPM files for Red Hat 8.6 (the OS running on your EC2 instance). For example:
    • Search for gcc and download its .rpm.
    • Search for openssl-devel, zlib-devel, etc.
    • Ensure all dependencies of these packages (sub-dependencies) are also downloaded.
  2. Alternatively, if you have access to a system with yum (like another Red Hat-based system):

bash

 

yumdownloader gcc make zlib-devel openssl-devel libffi-devel bzip2-devel sqlite-devel

This command downloads the .rpm files without installing them.

Step 4: Transfer RPM Files to EC2

  1. Use an SCP tool (e.g., WinSCP) or scp command to copy the RPM files to the EC2 instance:

bash

 

scp -i your-key.pem /path/to/rpm/files/*.rpm user@ec2-instance-ip:/path/to/destination

Step 5: Install RPMs on EC2

On the EC2 instance:

  1. Navigate to the directory where you copied the .rpm files:

bash

 

cd /path/to/rpm/files

  2. Install the packages:

bash

 

sudo rpm -ivh *.rpm

If there are dependency errors, ensure all required sub-dependencies are also present and install them.


This ensures your EC2 instance has all the tools and libraries to compile Python 3.10 successfully.



Option 1: Use a Compatible Repository

Use CentOS 8 Stream or Red Hat 8 compatible RPMs. These are often available from trusted repositories like:

  1. CentOS Vault (CentOS 8 is binary-compatible with Red Hat 8):
    https://vault.centos.org/
  2. Red Hat CDN (requires subscription):
    If you have a Red Hat Developer subscription, you can download packages from Red Hat's official repository using a Red Hat system.

Search for the following package names:

  • gcc
  • gcc-c++
  • glibc-devel
  • glibc-headers
  • libgcc

Option 2: Use a Yum-Enabled System

If you have a CentOS or Red Hat system with internet access:

  1. Install yum-utils:

bash

 

sudo yum install yum-utils

  2. Download the required packages for offline use:

bash

 

yumdownloader gcc glibc-devel zlib-devel openssl-devel libffi-devel bzip2-devel sqlite-devel

  3. Transfer these .rpm files to your Windows machine or directly to the EC2 instance.

 wget https://mirror.centos.org/centos/8/BaseOS/x86_64/os/Packages/gcc-8.5.0-4.el8.x86_64.rpm

wget https://mirror.centos.org/centos/8/AppStream/x86_64/os/Packages/gcc-c++-8.5.0-4.el8.x86_64.rpm

wget https://mirror.centos.org/centos/8/BaseOS/x86_64/os/Packages/glibc-devel-2.28-211.el8.x86_64.rpm

wget https://mirror.centos.org/centos/8/BaseOS/x86_64/os/Packages/libffi-devel-3.1-22.el8.x86_64.rpm

wget https://mirror.centos.org/centos/8/BaseOS/x86_64/os/Packages/make-4.2.1-11.el8.x86_64.rpm

wget https://mirror.centos.org/centos/8/BaseOS/x86_64/os/Packages/bzip2-devel-1.0.6-26.el8.x86_64.rpm

wget https://mirror.centos.org/centos/8/BaseOS/x86_64/os/Packages/zlib-devel-1.2.11-19.el8.x86_64.rpm

wget https://mirror.centos.org/centos/8/BaseOS/x86_64/os/Packages/openssl-devel-1.1.1k-5.el8.x86_64.rpm


 


Friday, 1 December 2023

Print which network interface is used to send data

from confluent_kafka import Producer

import socket

import netifaces


# Configuration for Redpanda Producer

conf = {

    'bootstrap.servers': '192.168.1.100:9092',  # Replace with your Redpanda broker

    'client.id': socket.gethostname()          # Set client ID

}


# Create a producer instance

producer = Producer(conf)


# Function to retrieve the network interface based on the IP address

def get_network_interface(ip_address):

    for iface in netifaces.interfaces():

        addrs = netifaces.ifaddresses(iface)

        if netifaces.AF_INET in addrs:

            for addr in addrs[netifaces.AF_INET]:

                if addr.get('addr') == ip_address:

                    return iface

    return "Unknown Interface"


# Function to log the network interface being used

def log_network_info():

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:

        sock.connect(('192.168.1.100', 9092))  # Replace with your broker's address

        local_ip, local_port = sock.getsockname()


        # Get the network interface using the local IP

        interface = get_network_interface(local_ip)

        print(f"Data sent using Interface: {interface} | Local IP: {local_ip} | Local Port: {local_port}")


# Function to handle delivery reports

def delivery_report(err, msg):

    if err is not None:

        print(f"Message delivery failed: {err}")

    else:

        print(f"Message delivered to {msg.topic()} [{msg.partition()}] at offset {msg.offset()}")


# Send a test message

topic = "test_topic"  # Replace with your topic

key = "key1"

value = "Hello, Redpanda!"


print("Starting Redpanda producer...")


log_network_info()  # Log network details before sending data

producer.produce(topic, key=key, value=value, callback=delivery_report)

producer.flush()
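
If installing netifaces on an offline instance is inconvenient, the same interface lookup can be done with psutil (already used elsewhere in these notes); a minimal sketch of an equivalent get_network_interface:

import socket

import psutil

def get_network_interface(ip_address):
    """Map a local IP address to its interface name using psutil instead of netifaces."""
    for iface, addrs in psutil.net_if_addrs().items():
        for addr in addrs:
            if addr.family == socket.AF_INET and addr.address == ip_address:
                return iface
    return "Unknown Interface"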


Installing Maven

 


 

On Linux (Ubuntu/Debian)

  1. Install Maven using APT:

bash

sudo apt update

sudo apt install maven

  2. Verify Installation:

bash

mvn -version

Output should display the installed Maven version.

  3. Manual Installation (Optional): Download the apache-maven binary tarball from the official Apache Maven download page, then extract it:

bash

 

tar -xvzf apache-maven-X.X.X-bin.tar.gz

    • Move it to /opt:

bash

 

sudo mv apache-maven-X.X.X /opt/maven

    • Add Maven to your PATH:

bash

 

echo 'export PATH=/opt/maven/bin:$PATH' >> ~/.bashrc

source ~/.bashrc


On Windows

  1. Download Maven: Get the apache-maven binary archive from the official Apache Maven download page.
  2. Extract Maven:
    • Extract the archive to a directory (e.g., C:\Program Files\Maven).
  3. Set Environment Variables:
    • Go to Control Panel > System > Advanced System Settings > Environment Variables.
    • Add a new system variable:
      • Name: MAVEN_HOME
      • Value: C:\Program Files\Maven
    • Edit the Path variable and add:

plaintext

 

C:\Program Files\Maven\bin

  4. Verify Installation: Open a new terminal and run:

bash

 

mvn -version

Output should display the Maven version.


On macOS

  1. Install Maven with Homebrew:

bash

 

brew install maven

  2. Verify Installation:

bash

 

mvn -version

Python Script to Transfer Files to EC2 Using SSM

import boto3

import base64

import os


def transfer_files_to_ec2(file_paths, instance_id, region="us-east-1"):

    # Initialize SSM client

    ssm_client = boto3.client("ssm", region_name=region)


    for file_path in file_paths:

        # Read and encode each file content in Base64

        with open(file_path, "rb") as file:

            file_content = file.read()

        base64_content = base64.b64encode(file_content).decode("utf-8")

        

        # Extract filename to be saved on EC2

        filename = os.path.basename(file_path)

        

        # Prepare SSM command with the Base64 content

        commands = [

            f'echo "{base64_content}" | base64 -d > /home/ec2-user/{filename}'

        ]

        

        # Send the command to EC2 via SSM

        response = ssm_client.send_command(

            DocumentName="AWS-RunShellScript",

            Parameters={"commands": commands},

            InstanceIds=[instance_id]

        )

        

        # Fetch command ID to track

        command_id = response["Command"]["CommandId"]

        

        print(f"File '{filename}' is being transferred to instance '{instance_id}' as '/home/ec2-user/{filename}'.")


# Example usage

file_paths = ["test1.py", "test2.py"]  # Replace with paths to your files

instance_id = "i-xxxxxxxxxxxxxxx"  # Replace with your EC2 instance ID

region = "us-east-1"  # Replace with your AWS region if different


transfer_files_to_ec2(file_paths, instance_id, region)
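
The script records command_id for each file but never checks whether the command actually succeeded. A small follow-up sketch (reusing the same boto3 SSM client) that polls the invocation status:

import time

def wait_for_command(ssm_client, command_id, instance_id, timeout_seconds=60):
    """Poll SSM until the command finishes (or the timeout expires) and return its status."""
    for _ in range(timeout_seconds):
        time.sleep(1)  # the invocation may not be queryable immediately after send_command
        result = ssm_client.get_command_invocation(
            CommandId=command_id,
            InstanceId=instance_id
        )
        if result["Status"] in ("Success", "Failed", "Cancelled", "TimedOut"):
            return result["Status"]
    return "Timeout"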


Virtual environment-like setup for Java

 

1. Download All Required Dependencies

Use Apache Maven to download and cache all dependencies.

  1. Open the terminal and navigate to your project directory.
  2. Run the following command:

bash

mvn dependency:go-offline

This command will pre-download all the dependencies required by your project and cache them locally in the .m2 directory (default Maven repository location).


2. Package the Application into a JAR

  1. Build a fat (or "uber") JAR that bundles your project code and all dependencies into a single file.

Add the Maven Shade plugin to your pom.xml to create this fat JAR:

xml

 

<build>

    <plugins>

        <plugin>

            <groupId>org.apache.maven.plugins</groupId>

            <artifactId>maven-shade-plugin</artifactId>

            <version>3.4.0</version>

            <executions>

                <execution>

                    <phase>package</phase>

                    <goals>

                        <goal>shade</goal>

                    </goals>

                </execution>

            </executions>

        </plugin>

    </plugins>

</build>

  2. Build the fat JAR:

bash

mvn clean package

The resulting JAR will be located in the target directory (e.g., target/kafka-consumer-1.0-SNAPSHOT.jar).


3. Prepare the Offline Virtual Environment

To simulate a virtual environment, package the JAR file along with the required tools for offline execution:

  1. Create a directory to store everything:

bash

 

mkdir kafka-consumer-offline

cd kafka-consumer-offline

  2. Copy the JAR file:

bash

 

cp path/to/target/kafka-consumer-1.0-SNAPSHOT.jar .

  3. Copy the .m2 Maven repository (contains all dependencies):

bash

 

cp -r ~/.m2/repository ./m2-repository

  4. Create a run.sh script to execute the project in this self-contained environment:

bash

 

cat > run.sh << 'EOF'
#!/bin/bash
export MAVEN_OPTS="-Dmaven.repo.local=$(pwd)/m2-repository"
java -jar kafka-consumer-1.0-SNAPSHOT.jar
EOF
chmod +x run.sh


 

4. Transfer and Execute the Project in an Offline Environment

 

  1. Transfer the kafka-consumer-offline directory to the offline machine.

 

  2. On the offline machine, navigate to the directory:

bash

cd kafka-consumer-offline

  3. Execute the project using the run.sh script:

bash

 

./run.sh


5. Testing the Environment

Before transferring the environment to the offline machine:

  • Disconnect your internet or simulate the offline environment.
  • Test the run.sh script locally to ensure all dependencies are properly bundled.

 

Node.js virtual environment

 

1. Use npm and Local node_modules

By default, Node.js projects are isolated at the project level through their node_modules directory. When you run npm install in a project, all dependencies are installed locally in the project's node_modules folder.

Steps:

  1. Create a new project directory:

bash

mkdir my-nodejs-project

cd my-nodejs-project

 

  2. Initialize a new package.json file:

bash

npm init -y

  3. Install dependencies:

bash

npm install kafka-node google-protobuf

  4. All dependencies will be stored locally in the node_modules folder, and your package.json will track them. This isolates the project environment.
  5. To run your script:

bash

node your_script.js   # replace your_script.js with your script's filename

This is equivalent to creating a Python virtual environment, as dependencies will only affect the current project.