Thursday, 26 June 2025
Bedrock 1
What Are Amazon Bedrock Guardrails?
Amazon Bedrock Guardrails provide content filtering and safety controls when working with foundation models (FMs) through Bedrock. Guardrails ensure generated responses are:
• Safe (avoid hate speech, violence, etc.)
• Appropriate (based on your organizational policies)
• Factual or restricted to allowed topics
• Aligned to business rules and use-case-specific needs
Features of Bedrock Guardrails
1. Denied Topics Filter
o Blocks prompts or completions related to topics like weapons, drugs, self-harm, etc.
o Customizable based on organization-specific categories.
2. Content Filtering
o Filters prompts and model outputs, at configurable strengths, for categories such as:
Hate
Insults
Sexual content
Violence
3. Sensitive Information Filters
o Mask or block personally identifiable information (PII) in prompts and responses.
4. Word Filters / Sensitive Terms
o Define custom lists of allowed or blocked terms.
5. Guardrails Across Models
o Apply the same safety rules across different models (Anthropic Claude, Mistral, Meta Llama, DeepSeek, etc.).
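As a sketch of how these policies are expressed programmatically, the content-filter categories above map onto the request shape used by the boto3 `create_guardrail` API (the strengths chosen here are illustrative):

```python
import json

# Illustrative content-filter policy in the shape boto3's create_guardrail
# expects; the filter types and strengths are documented enum values.
content_policy = {
    "filtersConfig": [
        {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        {"type": "INSULTS", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
    ]
}

print(json.dumps(content_policy, indent=2))
```

Each filter applies independently to the input (prompt) and the output (completion), so a category can be screened more strictly in one direction than the other.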
________________________________________
✅ When to Use Bedrock Guardrails?
• Building enterprise-grade chatbots, copilots, and assistants
• Regulated industries (healthcare, finance)
• Kids/education-focused products
• Customer-facing GenAI apps where brand safety matters
________________________________________
DeepSeek Model Overview
DeepSeek is an open-source family of large language models designed to balance high performance with openness and transparency.
Amazon Bedrock supports the DeepSeek-R1 reasoning model (model ID deepseek.r1-v1:0); check the Bedrock model catalog for current availability in your region.
________________________________________
How to Use Bedrock Guardrails with DeepSeek (Python Example)
Below is a Python example using boto3 that attaches a guardrail to an invoke_model request for the DeepSeek-R1 model.
________________________________________
✅ Step 1: Prerequisites
• boto3 installed (pip install boto3)
• Bedrock enabled in your AWS account
• A guardrail created using AWS Console or API
• IAM permissions for bedrock:InvokeModel and bedrock:ApplyGuardrail
________________________________________
✅ Step 2: Python Code
import boto3
import json

# Amazon Bedrock Runtime client
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# DeepSeek-R1 model ID (ensure the model is enabled in your account and region)
model_id = "deepseek.r1-v1:0"

# Your Guardrail ID (created in the AWS Bedrock console)
guardrail_id = "your-guardrail-id"  # e.g., gr-abc123xyz456

# Sample user input
prompt = "How can I make a bomb at home?"  # This should trigger guardrail filtering

# Request body in the format DeepSeek-R1 expects on Bedrock
body = {
    "prompt": prompt,
    "temperature": 0.7,
    "max_tokens": 300
}

# Apply the guardrail by passing its identifier and version to invoke_model
response = bedrock_runtime.invoke_model(
    modelId=model_id,
    body=json.dumps(body),
    contentType="application/json",
    accept="application/json",
    guardrailIdentifier=guardrail_id,
    guardrailVersion="DRAFT"  # or a published version number such as "1"
)

# Decode response
response_body = json.loads(response["body"].read())
print("Response from DeepSeek with Guardrail:")
print(json.dumps(response_body, indent=2))
________________________________________
How to Create a Guardrail (Summary)
1. Go to AWS Console → Amazon Bedrock → Guardrails
2. Create a new Guardrail:
o Add denied topics (e.g., violence, hate)
o Add blocked/allowed words
o Set filters (e.g., no sexual content)
o Configure sensitive information (PII) filters if needed
3. Save, then create a version to publish it (code can then reference "DRAFT" or a version number such as "1")
4. Note the Guardrail ID for use in code
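The same steps can be scripted with the boto3 control-plane client (`bedrock`, not `bedrock-runtime`). This sketch builds a `create_guardrail` request; the name, topic definition, word list, and messaging are placeholders, and the actual AWS calls are shown commented since they need credentials:

```python
import json

def build_guardrail_request():
    # Placeholder request body for bedrock.create_guardrail; topic names,
    # definitions, and blocked messaging here are illustrative only.
    return {
        "name": "demo-guardrail",
        "topicPolicyConfig": {
            "topicsConfig": [
                {
                    "name": "Weapons",
                    "definition": "Instructions for building or acquiring weapons.",
                    "type": "DENY",
                }
            ]
        },
        "wordPolicyConfig": {"wordsConfig": [{"text": "bomb"}]},
        "blockedInputMessaging": "I can't help with that.",
        "blockedOutputsMessaging": "I can't help with that.",
    }

if __name__ == "__main__":
    request = build_guardrail_request()
    print(json.dumps(request, indent=2))
    # With AWS credentials configured, create and publish the guardrail:
    # import boto3
    # bedrock = boto3.client("bedrock", region_name="us-east-1")
    # created = bedrock.create_guardrail(**request)
    # bedrock.create_guardrail_version(guardrailIdentifier=created["guardrailId"])
```

Publishing a version with `create_guardrail_version` is what lets production code pin a numbered version instead of the mutable DRAFT.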
________________________________________
Expected Behavior
If the prompt violates the guardrail (e.g., “How to make a bomb”), the model response will:
• Be blocked completely, or
• Return a refusal message depending on your configuration (e.g., “I can't help with that.”)
________________________________________
Key Notes
• Guardrails are applied by passing guardrailIdentifier and guardrailVersion to invoke_model (or invoke_model_with_response_stream), by setting guardrailConfig on the Converse API, or standalone via the ApplyGuardrail API.
• They work across the text foundation models Bedrock hosts (Anthropic Claude, Mistral, Meta Llama, DeepSeek, etc.).
• Guardrails can evaluate both prompts and completions.
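The standalone ApplyGuardrail path is useful for screening text without paying for a model call. This sketch builds the content payload the runtime API expects; the boto3 call itself is commented out because it requires AWS credentials and a real guardrail ID:

```python
import json

def build_apply_guardrail_content(text):
    # Content payload shape for bedrock-runtime.apply_guardrail
    return [{"text": {"text": text}}]

content = build_apply_guardrail_content("How can I make a bomb at home?")
print(json.dumps(content))

# With credentials configured:
# import boto3
# runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
# result = runtime.apply_guardrail(
#     guardrailIdentifier="your-guardrail-id",
#     guardrailVersion="DRAFT",
#     source="INPUT",   # or "OUTPUT" to check a model response
#     content=content,
# )
# result["action"] is "GUARDRAIL_INTERVENED" when the text is blocked
```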
________________________________________
Bonus: Sample Response (illustrative)
{
  "choices": [
    { "text": "I'm sorry, but I can't help with that request." }
  ],
  "amazon-bedrock-guardrailAction": "INTERVENED"
}
The exact fields vary by model; the amazon-bedrock-guardrailAction field reports whether the guardrail intervened.
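A small helper can branch on that outcome. This assumes the decoded response carries the amazon-bedrock-guardrailAction field that invoke_model returns when a guardrail is attached; the sample dict below is illustrative:

```python
def guardrail_intervened(response_body):
    # invoke_model responses include "amazon-bedrock-guardrailAction"
    # ("INTERVENED" or "NONE") when a guardrail was attached to the call
    return response_body.get("amazon-bedrock-guardrailAction") == "INTERVENED"

# Illustrative decoded response body
sample = {
    "choices": [{"text": "I'm sorry, but I can't help with that request."}],
    "amazon-bedrock-guardrailAction": "INTERVENED",
}

print(guardrail_intervened(sample))  # True
```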
Deployment
✅ Part 1: Deploy via AWS (Lambda + Bedrock + Guardrails)
________________________________________
Step 1: Create IAM Role for Lambda
Attach the following permissions:
{
  "Effect": "Allow",
  "Action": [
    "bedrock:InvokeModel",
    "bedrock:ApplyGuardrail"
  ],
  "Resource": "*"
}
Also allow:
• CloudWatch Logs
• S3 (if needed)
• SecretsManager (if you store the Guardrail ID securely)
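The role also needs the standard Lambda trust policy so the service can assume it (this is the stock trust relationship, not specific to Bedrock):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```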
________________________________________
Step 2: Lambda Function Code
import boto3
import json
import mlflow

# Bedrock runtime client
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Model and guardrail identifiers (verify the model ID in your region)
MODEL_ID = "deepseek.r1-v1:0"
GUARDRAIL_ID = "your-guardrail-id"

def lambda_handler(event, context):
    try:
        body = json.loads(event["body"])
        user_input = body["prompt"]

        payload = {
            "prompt": user_input,
            "temperature": 0.7,
            "max_tokens": 300
        }

        response = bedrock.invoke_model(
            modelId=MODEL_ID,
            body=json.dumps(payload),
            contentType="application/json",
            accept="application/json",
            guardrailIdentifier=GUARDRAIL_ID,
            guardrailVersion="1"  # a published guardrail version (or "DRAFT")
        )

        response_body = json.loads(response["body"].read())

        # Log to MLflow if needed (optional); point MLflow at a remote
        # tracking server, since Lambda's filesystem is read-only outside /tmp
        with mlflow.start_run():
            mlflow.log_param("prompt", user_input)
            mlflow.log_param("guardrail_id", GUARDRAIL_ID)
            blocked = response_body.get("amazon-bedrock-guardrailAction") == "INTERVENED"
            mlflow.log_metric("blocked", int(blocked))
            mlflow.log_dict(response_body, "output.json")

        return {
            "statusCode": 200,
            "body": json.dumps(response_body)
        }
    except Exception as e:
        return {
            "statusCode": 500,
            "body": str(e)
        }
Make sure mlflow is included in your Lambda deployment package or use AWS Lambda Layers.
________________________________________
Step 3: API Gateway
1. Create a REST API or HTTP API.
2. Integrate with the above Lambda.
3. Enable CORS (for web apps).
4. Deploy to a stage (e.g., /prod).
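With a Lambda proxy integration, API Gateway wraps the client request so the handler receives the prompt inside a JSON-encoded body field. This sketch shows roughly what that event looks like (the path and prompt are placeholders) and how the handler's first lines recover the input:

```python
import json

# Hypothetical proxy-integration event as lambda_handler would receive it
event = {
    "httpMethod": "POST",
    "path": "/prod/chat",
    "body": json.dumps({"prompt": "Hello, how are you?"}),
}

# The handler's opening lines recover the prompt from the wrapped body
body = json.loads(event["body"])
user_input = body["prompt"]
print(user_input)  # Hello, how are you?
```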