Mastering Amazon Bedrock Security: Encrypting Custom Models for FinOps Governance

Overview

The adoption of Generative AI is transforming how businesses innovate, with services like Amazon Bedrock making it easier than ever to build custom models trained on proprietary data. As organizations fine-tune foundation models with sensitive information, these AI artifacts become high-value intellectual property and regulated assets that demand robust protection.

A critical, yet often overlooked, aspect of this process is data encryption. By default, AWS provides a baseline level of encryption for custom models, but this relies on keys managed by AWS. True enterprise-grade security and governance require a more proactive approach: using customer-managed keys.

This shift is not merely a technical detail; it is a fundamental component of a mature FinOps and cloud governance strategy. Enforcing customer-managed encryption ensures that you retain full control over your data’s lifecycle, meet stringent compliance requirements, and establish clear lines of accountability for your most valuable AI assets. In this article, we explore why this control is non-negotiable for any organization serious about securing its AI investments in AWS.

Why It Matters for FinOps

Implementing strong encryption practices for Amazon Bedrock directly impacts the financial and operational health of your cloud program. While it may not reduce your monthly cloud bill, it significantly mitigates costly risks and aligns with core FinOps principles of governance and accountability.

The business impact of relying on default encryption is substantial. Non-compliance can result in failed audits, delaying sales cycles and potentially leading to heavy regulatory fines under frameworks like GDPR, HIPAA, and PCI DSS. The operational drag of remediating these findings or responding to a security incident is also a hidden cost, diverting engineering resources from value-generating work.

From a governance perspective, using customer-managed keys enables clear ownership and supports effective chargeback or showback models. By associating specific keys with specific projects or business units, you can accurately allocate security and infrastructure costs. This ensures that the teams developing custom AI models are also accountable for their security posture, fostering a culture of shared responsibility.

What Counts as “Idle” in This Article

In the context of this article, we define an "idle" resource not by its CPU or network utilization, but by its governance state. A custom model in Amazon Bedrock is considered "governance-idle" or "at-risk" when it is protected by default, AWS-owned encryption keys.

This state represents a passive security posture. The organization has ceded control over the key’s lifecycle, access policies, and auditability to the cloud provider. The resource is idle from a management perspective because it is not actively governed under the organization’s specific security and compliance policies.

Key signals of this at-risk state include a model’s configuration showing an "AWS owned key" for encryption and a corresponding lack of specific, granular key usage events in audit logs. These models represent a gap in control and a potential compliance failure waiting to be discovered.
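
This signal can be checked programmatically. The sketch below assumes the field naming used by the Bedrock custom-model APIs, where a model encrypted with a customer-managed key exposes a `modelKmsKeyArn` value and a model under the default AWS-owned key omits it; verify the exact response shape against the current API reference before relying on it:

```python
def find_at_risk_models(models):
    """Partition custom model summaries into CMK-encrypted and at-risk
    (default, AWS-owned key) groups. Assumes a model encrypted with a
    customer-managed key carries a 'modelKmsKeyArn' field and a model
    under default encryption omits it."""
    compliant, at_risk = [], []
    for model in models:
        if model.get("modelKmsKeyArn"):
            compliant.append(model["modelName"])
        else:
            at_risk.append(model["modelName"])
    return compliant, at_risk

# Example inventory (placeholder names and ARN):
inventory = [
    {"modelName": "summarizer-v2",
     "modelKmsKeyArn": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"},
    {"modelName": "classifier-v1"},  # no key ARN -> governance-idle
]
compliant, at_risk = find_at_risk_models(inventory)
```

Feeding this the output of a `ListCustomModels` sweep gives a quick remediation worklist for the audit step described later in the checklist.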

Common Scenarios

Scenario 1

A healthcare organization fine-tunes an Amazon Bedrock model on thousands of patient records to improve clinical note summarization. With default encryption, they cannot definitively demonstrate cryptographic erasure of the model should HIPAA obligations require it. Mandating a customer-managed key lets them control the key’s lifecycle, ensuring they can permanently revoke access to the protected health information (PHI) embedded within the model.
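
Cryptographic erasure in this scenario comes down to scheduling the CMK itself for deletion: once the waiting period elapses, every artifact encrypted under it becomes permanently unreadable. A minimal sketch against the KMS `ScheduleKeyDeletion` API (the key ARN is a placeholder):

```python
# Scheduling the CMK for deletion is the erasure mechanism; KMS enforces
# a waiting period (7-30 days) during which the request can be cancelled.
deletion_request = {
    "KeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",  # placeholder
    "PendingWindowInDays": 7,  # minimum waiting period KMS allows
}

# The live call would be (requires credentials and a real key):
# import boto3
# boto3.client("kms").schedule_key_deletion(**deletion_request)
```

Treat this as irreversible in planning terms: any model, backup, or output still encrypted under the key is lost with it, which is exactly the property the erasure requirement depends on.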

Scenario 2

A financial services firm develops a proprietary fraud detection model using years of transaction data. This model is a critical piece of intellectual property. By using a customer-managed key with automated rotation, the firm protects its algorithm from unauthorized access and aligns with strict PCI DSS requirements for key management, ensuring that a compromise of one security layer does not expose the asset.

Scenario 3

A legal tech company trains a model on confidential client contracts to build a contract analysis tool. To provide security assurance to their clients, they use separate customer-managed keys for different tenants. This isolates each client’s data, ensuring that the model artifacts are cryptographically separated and that access can be audited and controlled on a per-client basis.

Risks and Trade-offs

The primary trade-off in enforcing customer-managed encryption is balancing operational effort against security and compliance risk. Encryption settings cannot simply be changed on an existing custom model; remediating one with default encryption requires re-running the entire model customization job with the correct key settings. This consumes time and compute, and it must be planned carefully to avoid disrupting production workflows.

A significant risk is misconfiguring the key policies. If the policy on a customer-managed key is too restrictive, it can inadvertently block legitimate applications or data science teams from accessing the model, leading to availability issues. Conversely, a policy that is too permissive defeats the purpose of the control.

However, the risk of inaction is far greater. Failing to use customer-managed keys leaves valuable intellectual property and sensitive data vulnerable, creates significant compliance gaps, and weakens your overall data governance posture. The operational cost of remediation is a planned expense, whereas the cost of a data breach or failed audit is unpredictable and often catastrophic.

Recommended Guardrails

To proactively manage Amazon Bedrock security, organizations should establish clear governance and technical guardrails. These controls help prevent the creation of non-compliant resources and simplify the management of AI assets.

  • Policy Mandates: Establish a firm policy that all custom models in Amazon Bedrock must be encrypted with a customer-managed key. Use AWS Service Control Policies (SCPs) or IAM policies to enforce this at the organizational level.
  • Tagging Standards: Implement a consistent tagging strategy for all KMS keys and Bedrock models. Tags should include identifiers for the owner, project, cost center, and data sensitivity level to facilitate ownership tracking and cost allocation.
  • Centralized Key Management: Designate a central team, such as cloud security or platform engineering, to manage the lifecycle of KMS keys. Application and data science teams should be key users, not administrators.
  • Automated Alerts: Configure monitoring to detect and alert on the creation of any new custom model that is not using a customer-managed key. This enables rapid response before the non-compliant resource becomes deeply integrated into production systems.
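
The policy-mandate guardrail can be expressed as a deny-by-default SCP. The sketch below is illustrative only: the condition key `bedrock:CustomModelKmsKey` is an assumption for the sake of the example, so check the Amazon Bedrock entry in the AWS service authorization reference for the actual condition key before deploying anything like this:

```python
import json

# Sketch of an SCP that denies new customization jobs unless a KMS key
# is supplied. The condition key "bedrock:CustomModelKmsKey" is a
# HYPOTHETICAL placeholder -- verify the real key in the AWS service
# authorization reference for Amazon Bedrock.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireCmkForCustomModels",
        "Effect": "Deny",
        "Action": "bedrock:CreateModelCustomizationJob",
        "Resource": "*",
        # "Null": true matches requests where the key is absent entirely.
        "Condition": {"Null": {"bedrock:CustomModelKmsKey": "true"}},
    }],
}

print(json.dumps(scp, indent=2))
```

Attached at the organization root or OU level, a policy of this shape turns the mandate from a written rule into a hard preventive control.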

Provider Notes

AWS

Amazon Bedrock is a fully managed service that provides access to foundation models for building generative AI applications. When you create a custom model through fine-tuning, the resulting model artifacts are stored and encrypted at rest. The key to strong governance lies in how that encryption is managed.

AWS provides two primary options: default encryption using an AWS-owned key, or enhanced control using AWS Key Management Service (KMS). By creating and specifying a customer-managed key (CMK), you gain direct control over the key’s access policies, rotation schedule, and lifecycle.
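
Specifying the CMK happens at job creation time. A hedged sketch of the request a customization job would carry via the boto3 `bedrock` client; every ARN, model name, and bucket path below is a placeholder:

```python
# Sketch of starting a fine-tuning job with a customer-managed key.
# All identifiers are placeholders; the parameter of interest is
# customModelKmsKeyId, which binds the resulting model artifacts to
# your CMK instead of the default AWS-owned key.
job_params = {
    "jobName": "summarizer-ft-001",
    "customModelName": "clinical-summarizer-v1",
    "roleArn": "arn:aws:iam::111122223333:role/BedrockCustomizationRole",
    "baseModelIdentifier": "amazon.titan-text-express-v1",
    "customModelKmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
    "trainingDataConfig": {"s3Uri": "s3://example-bucket/train.jsonl"},
    "outputDataConfig": {"s3Uri": "s3://example-bucket/output/"},
}

# The live call would be (requires credentials and a provisioned role):
# import boto3
# boto3.client("bedrock").create_model_customization_job(**job_params)
```

Because the key is fixed at creation, a model trained without `customModelKmsKeyId` can only be brought into compliance by re-running the job, which is the remediation cost discussed above.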

Every use of a CMK is logged in AWS CloudTrail, providing a detailed audit trail of who accessed your model data and when. This level of visibility is essential for security forensics and compliance reporting. For detailed information, refer to the official documentation on data protection in Amazon Bedrock.
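
As a sketch, the per-key audit question ("which events touched this key?") reduces to a simple filter over CloudTrail records, using the standard record fields (`eventSource`, `eventName`, `resources`):

```python
def cmk_usage_events(events, key_arn):
    """Filter CloudTrail records down to KMS events that touched the
    given key -- the per-key audit trail that a default AWS-owned key
    cannot provide. Assumes the standard CloudTrail record shape."""
    return [
        e for e in events
        if e.get("eventSource") == "kms.amazonaws.com"
        and any(r.get("ARN") == key_arn for r in e.get("resources", []))
    ]

# Example records (placeholder ARN):
key = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"
records = [
    {"eventSource": "kms.amazonaws.com", "eventName": "Decrypt",
     "resources": [{"ARN": key}]},
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject"},
]
hits = cmk_usage_events(records, key)
```

In practice you would run the equivalent query in CloudTrail Lake or Athena rather than in-memory, but the predicate is the same.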

Binadox Operational Playbook

Binadox Insight: Customer-managed encryption for AI models is non-negotiable for enterprise governance. It transforms security from a passive default setting into an active, auditable control that protects your most critical digital assets and aligns with FinOps principles of ownership and accountability.

Binadox Checklist:

  • Audit all existing Amazon Bedrock custom models to identify those using default encryption.
  • Establish a standardized KMS key policy template specifically for Bedrock resources.
  • Implement IAM guardrails to prevent the creation of new custom models without a specified customer-managed key.
  • Tag all KMS keys and associated Bedrock models with owner and cost center information for effective showback.
  • Develop a documented runbook for re-training and replacing non-compliant models.
  • Update internal security and compliance documentation to officially mandate the use of CMKs for all AI model artifacts.

Binadox KPIs to Track:

  • Percentage of custom models encrypted with customer-managed keys.
  • Mean-time-to-remediate for newly discovered non-compliant models.
  • Number of audit findings related to data encryption and key management.
  • Cost of rework associated with re-training models to achieve compliance.
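
The first KPI is straightforward to compute from a model inventory. This sketch assumes the same convention as the audit step: models encrypted with a CMK expose a `modelKmsKeyArn` field, and models on the default key omit it:

```python
def cmk_coverage(models):
    """Percentage of custom models encrypted with a customer-managed
    key. An empty inventory is treated as fully compliant (nothing
    exists that violates the policy)."""
    if not models:
        return 100.0
    compliant = sum(1 for m in models if m.get("modelKmsKeyArn"))
    return round(100.0 * compliant / len(models), 1)

# Two of four models carry a key ARN -> 50.0% coverage.
sample = [
    {"modelName": "a", "modelKmsKeyArn": "arn:aws:kms:us-east-1:111122223333:key/X"},
    {"modelName": "b"},
    {"modelName": "c", "modelKmsKeyArn": "arn:aws:kms:us-east-1:111122223333:key/Y"},
    {"modelName": "d"},
]
```

Tracking this number per account or per business unit makes the showback conversation concrete: the target is 100%, and anything below it maps to a named owner via the tagging standard.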

Binadox Common Pitfalls:

  • Forgetting to grant the Bedrock service principal (bedrock.amazonaws.com) the necessary permissions in the KMS key policy.
  • Applying overly restrictive key policies that block legitimate application or user access, causing service disruptions.
  • Neglecting to decommission the old, non-compliant models after deploying their replacements, leaving a security risk on the account.
  • Failing to account for the time and compute costs of re-training models in project planning and budgets.
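
The first pitfall above, a key policy that locks out the customization job itself, is worth illustrating. The statement below is a hypothetical sketch only: whether you grant the job role directly or scope access through the Bedrock service (for example with a `kms:ViaService` condition), and which actions are strictly required, should be verified against the Amazon Bedrock data-protection documentation. The role ARN is a placeholder:

```python
# HYPOTHETICAL key policy statement letting a Bedrock customization job
# use the CMK. Principal, actions, and any service conditions must be
# checked against the Bedrock data-protection docs before use.
statement = {
    "Sid": "AllowBedrockCustomizationRole",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/BedrockCustomizationRole"
    },
    "Action": [
        "kms:Decrypt",
        "kms:GenerateDataKey",
        "kms:DescribeKey",
        "kms:CreateGrant",
    ],
    "Resource": "*",  # in a key policy, "*" means this key only
}
```

Omitting a statement of this shape is the classic failure mode: the job starts, then fails partway through with a KMS access-denied error, burning the training compute you budgeted for.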

Conclusion

Securing your generative AI investments in AWS requires moving beyond default settings. Enforcing the use of customer-managed keys for Amazon Bedrock custom models is a foundational practice for any organization that values data sovereignty, regulatory compliance, and robust governance.

By taking control of your encryption keys, you gain granular control over data access, create a verifiable audit trail, and enable critical security capabilities like cryptographic erasure. While this requires a proactive approach and initial operational effort, it establishes a secure and scalable foundation for your AI initiatives. The next step is to audit your environment, establish clear guardrails, and ensure that every custom model is an asset you fully control.