Today, we’re announcing three new features for Amazon S3 Storage Lens that give you deeper insight into your storage performance and usage patterns. With the addition of performance metrics, support for analyzing trillions of prefixes, and direct export to Amazon S3 Tables, you have the tools you need to optimize application performance, reduce costs, and make data-driven decisions about your Amazon S3 storage strategy.
New categories of performance metrics
S3 Storage Lens now includes eight new categories of performance metrics to help you identify and address performance constraints in your organization. These are available at the organization, account, bucket, and prefix levels. For example, the service helps you identify small objects in a bucket or prefix that can slow down your application’s performance. You can mitigate this by batching small objects or by using the Amazon S3 Express One Zone storage class for higher performance when working with small objects.
To access the new performance metrics, you must enable them at the S3 Storage Lens advanced metrics tier when creating a new Storage Lens dashboard or modifying an existing configuration.
| Metric category | Details | Use case | Mitigation |
| --- | --- | --- | --- |
| Read request size | Distribution of read (GET) request sizes by day | Identify datasets with small read request patterns that slow down performance | Small requests: batch small objects or use Amazon S3 Express One Zone for high-performance small-object workloads |
| Write request size | Distribution of write (PUT, POST, COPY, and UploadPart) request sizes by day | Identify datasets with small write request patterns that slow down performance | Large requests: parallelize requests, use multipart upload (MPU), or use the AWS Common Runtime (CRT) |
| Object size | Distribution of object sizes | Identify datasets with very small objects that slow down performance | Small objects: consider batching small objects together |
| Concurrent PUT 503 errors | Count of 503 errors caused by concurrent PUT operations on the same object | Identify prefixes with concurrent PUT throttling that slows down performance | For a single writer, adjust retry behavior or use Amazon S3 Express One Zone. For multiple writers, use a consensus mechanism or use Amazon S3 Express One Zone |
| Cross-Region data transfer | Bytes transferred and requests sent across Regions and within a Region | Identify potential performance and cost improvements from cross-Region data access | Colocate compute with data in the same AWS Region |
| Unique objects accessed | Number or percentage of unique objects accessed per day | Identify datasets where a small subset of objects is frequently accessed and could be moved to a higher-performance storage tier | Consider moving active data to Amazon S3 Express One Zone or another caching solution |
| FirstByteLatency (existing Amazon CloudWatch metric) | Daily average of the first byte latency metric | Daily average per-request time from when a complete request is received to when the response starts to be returned | |
| TotalRequestLatency (existing Amazon CloudWatch metric) | Daily average of total request latency | Daily average elapsed time from the first byte received to the last byte sent | |
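The small-object mitigation suggested in the table can be sketched in a few lines. This is a minimal, illustrative example (not an official AWS utility): it packs many small objects into one in-memory tar archive so they can be uploaded with a single PUT instead of thousands of tiny requests.

```python
import io
import tarfile


def batch_objects(objects: dict[str, bytes]) -> bytes:
    """Pack many small objects into one tar archive held in memory.

    Uploading the archive as a single PUT avoids the per-request
    overhead of many tiny writes.
    """
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for key, body in objects.items():
            info = tarfile.TarInfo(name=key)
            info.size = len(body)
            tar.addfile(info, io.BytesIO(body))
    return buf.getvalue()


def unbatch(archive: bytes) -> dict[str, bytes]:
    """Recover the original objects from a batched archive."""
    out = {}
    with tarfile.open(fileobj=io.BytesIO(archive), mode="r") as tar:
        for member in tar.getmembers():
            out[member.name] = tar.extractfile(member).read()
    return out
```

The same idea applies to any container format your readers can unpack efficiently; tar is used here only because it ships with the standard library.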
How it works
On the Amazon S3 console, I choose Create Storage Lens dashboard to create a new dashboard. You can also modify an existing dashboard configuration. I configure the general settings, such as the dashboard Name, the Home Region, and optional Tags. Then I choose Next.

Next, I define the scope of the dashboard by selecting Include all Regions and Include all buckets, or by specifying the Regions and buckets to include.

I select the Advanced tier in the Storage Lens dashboard configuration, select Performance metrics, then choose Next.

Next, I choose Prefix aggregation as an additional metrics aggregation, leave the rest of the settings as default, and choose Next.

For the metrics export, I choose General purpose bucket as the bucket type, then select an Amazon S3 bucket in my AWS account as the Destination bucket. I leave the rest of the settings as default and choose Next.

I review all the information, then choose Submit to complete the process.
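The same configuration can be expressed programmatically with the s3control PutStorageLensConfiguration API. This is a hedged sketch: the overall JSON shape follows the existing StorageLensConfiguration structure, but the account ID is a placeholder and the key for the new performance metrics setting is a hypothetical stand-in, so check the current API reference for exact field names before using it.

```python
def build_storage_lens_config(dashboard_id: str, destination_bucket_arn: str) -> dict:
    """Build a Storage Lens configuration payload in the general shape
    accepted by the s3control PutStorageLensConfiguration API.

    NOTE: "PerformanceMetrics" is a hypothetical placeholder for the new
    setting announced here; verify the real key in the API reference.
    """
    return {
        "Id": dashboard_id,
        "IsEnabled": True,
        "AccountLevel": {
            "ActivityMetrics": {"IsEnabled": True},               # advanced tier
            "PerformanceMetrics": {"IsEnabled": True},            # placeholder name
            "BucketLevel": {
                "ActivityMetrics": {"IsEnabled": True},
                "PrefixLevel": {                                  # prefix aggregation
                    "StorageMetrics": {"IsEnabled": True},
                },
            },
        },
        "DataExport": {
            "S3BucketDestination": {
                "Format": "CSV",                                  # or "Parquet"
                "OutputSchemaVersion": "V_1",
                "AccountId": "111122223333",                      # placeholder account
                "Arn": destination_bucket_arn,
            },
        },
    }
```

A payload like this would then be passed to `put_storage_lens_configuration` on a boto3 `s3control` client.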

After activation, I receive daily performance metrics directly in the Storage Lens console dashboard. You can also choose to export the report in CSV or Parquet format to any bucket in your account, or publish it to Amazon CloudWatch. Performance metrics are aggregated and published daily and are available at multiple levels: organization, account, bucket, and prefix. In the dashboard drop-down menus, I select PUT 503 Concurrent Error % for Metric, Last 30 days for Period, and 10 for Top N buckets.
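If you export the report in CSV format, ranking buckets by a metric is straightforward to script. This is an illustrative sketch: the column names and metric value below are invented for the example, not the actual export schema, which is documented in the Amazon S3 User Guide.

```python
import csv
import io

# Hypothetical sample rows; real column names come from the export schema.
SAMPLE = """\
bucket_name,metric_name,metric_value
photos,ConcurrentPut503ErrorPct,4.2
logs,ConcurrentPut503ErrorPct,0.1
uploads,ConcurrentPut503ErrorPct,2.7
"""


def top_n_buckets(report_csv: str, metric: str, n: int) -> list[tuple[str, float]]:
    """Return the top-N buckets by a given metric from a CSV metrics export."""
    rows = [
        r for r in csv.DictReader(io.StringIO(report_csv))
        if r["metric_name"] == metric
    ]
    rows.sort(key=lambda r: float(r["metric_value"]), reverse=True)
    return [(r["bucket_name"], float(r["metric_value"])) for r in rows[:n]]
```

The same filter-sort-slice pattern works for any of the metric categories in the table above.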

The Concurrent PUT 503 error count metric tracks the number of 503 errors generated by simultaneous PUT operations on the same object. These throttling errors can reduce application performance. For a single writer, modify the retry behavior or use a higher-performance storage tier such as Amazon S3 Express One Zone to mitigate concurrent PUT 503 errors. For a multi-writer scenario, use a consensus mechanism to avoid concurrent PUT 503 errors, or use a higher-performance storage tier such as Amazon S3 Express One Zone.

Complete analytics for all prefixes in your S3 buckets
S3 Storage Lens now supports analyzing all prefixes in your S3 buckets through the new expanded prefix metrics report. This capability removes the previous restrictions that limited analysis to prefixes exceeding a 1% size threshold and a maximum depth of 10 levels. You can now track trillions of prefixes per bucket for analysis at the most granular prefix level, regardless of size or depth.
The expanded prefix metrics report includes all existing categories of S3 Storage Lens metrics: storage usage, activity metrics (requests and bytes transferred), data protection metrics, and detailed status code metrics.
How to start
I follow the same steps described in the How it works section to create or update a Storage Lens dashboard. In step 4 on the console, where you select export options, I can select the new Expanded prefix metrics report. I can then export the expanded prefix metrics in CSV or Parquet format to any general purpose bucket in my account to efficiently query my Storage Lens data.

Good to know
This enhancement addresses scenarios where organizations need granular visibility across their entire prefix structure. For example, you can identify prefixes with incomplete multipart uploads to reduce costs, track compliance with encryption and replication requirements across the entire prefix structure, and detect performance issues at the most granular level.
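The incomplete-multipart-upload use case reduces to a simple scan over per-prefix rows. The rows and column names below are invented for illustration; the real export schema is in the Amazon S3 User Guide.

```python
# Hypothetical per-prefix rows from an expanded prefix metrics report.
ROWS = [
    {"prefix": "videos/raw/", "incomplete_mpu_bytes": 7_500_000_000},
    {"prefix": "videos/edited/", "incomplete_mpu_bytes": 0},
    {"prefix": "tmp/uploads/", "incomplete_mpu_bytes": 120_000_000},
]


def prefixes_with_incomplete_mpu(rows: list[dict], threshold_bytes: int) -> list[str]:
    """Flag prefixes carrying incomplete multipart upload bytes above a
    threshold -- candidates for a lifecycle rule that aborts stale MPUs.
    """
    flagged = [r["prefix"] for r in rows if r["incomplete_mpu_bytes"] > threshold_bytes]
    return sorted(flagged)
```

Once flagged, these prefixes are natural targets for an S3 Lifecycle rule with AbortIncompleteMultipartUpload.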
Export S3 Storage Lens metrics to S3 tables
S3 Storage Lens metrics can now be automatically exported to S3 Tables, fully managed table storage with built-in Apache Iceberg support. This integration delivers metrics daily and automatically to AWS managed S3 tables, ready to query without additional processing infrastructure.
How to start
I start by following the procedure described above. In step 5 on the console, where I select the export destination, I choose Expanded prefix metrics report. In addition to a general purpose bucket, I can now choose Table bucket.
The new Storage Lens metrics are exported to new tables in the AWS managed table bucket aws-s3.

I choose the extended_prefix_activity_metrics table to view API activity metrics from the expanded prefix report.

I can preview the table on the Amazon S3 console or use Amazon Athena to query the table.
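To show the kind of SQL you might run in Athena against that table, here is a local stand-in using sqlite3. The table name matches the one shown above, but the columns and rows are illustrative assumptions, not the real export schema, and Athena's SQL dialect differs from SQLite's in places.

```python
import sqlite3

# In-memory stand-in for querying the exported table with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE extended_prefix_activity_metrics (
        bucket TEXT, prefix TEXT, get_requests INTEGER, put_requests INTEGER
    )
""")
conn.executemany(
    "INSERT INTO extended_prefix_activity_metrics VALUES (?, ?, ?, ?)",
    [
        ("photos", "thumbs/", 90_000, 1_000),
        ("photos", "raw/", 2_000, 5_000),
        ("logs", "2025/06/", 500, 40_000),
    ],
)


def busiest_prefixes(limit: int = 2) -> list[tuple]:
    """Rank prefixes by total request count, as an Athena-style query would."""
    cur = conn.execute(
        """
        SELECT bucket, prefix, get_requests + put_requests AS total
        FROM extended_prefix_activity_metrics
        ORDER BY total DESC
        LIMIT ?
        """,
        (limit,),
    )
    return cur.fetchall()
```

Against the real table, the same query would run unchanged in concept: aggregate, order, and limit over the daily metrics rows.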

Good to know
The integration of S3 Tables with S3 Storage Lens simplifies metric analysis using familiar SQL tools and AWS analytics services such as Amazon Athena, Amazon QuickSight, Amazon EMR, and Amazon Redshift without the need for data pipelines. Metrics are automatically organized for optimal querying with custom retention and encryption options to suit your needs.
This integration enables analysis across accounts and Regions, building custom dashboards, and correlating data with other AWS services. For example, you can combine Storage Lens metrics with S3 Metadata to analyze prefix-level activity patterns and identify prefixes containing cold data that are good candidates for lower-cost storage tiers.
For your AI agent workflows, you can use natural language to query S3 Storage Lens metrics on S3 Tables using the S3 Tables MCP Server. Agents can ask questions like “which buckets have grown the most in the last month?” or “show me storage costs by storage class” and get instant insight from your observability data.
Now available
All three enhancements are available in all AWS Regions where S3 Storage Lens is currently offered, except the China and AWS GovCloud (US) Regions.
These features are included in the Amazon S3 Storage Lens advanced tier at no additional charge beyond standard advanced-tier pricing. When exporting to S3 Tables, you pay only for S3 Tables storage, maintenance, and queries; there is no additional charge for the export itself.
For more information about Amazon S3 Storage Lens performance metrics, support for trillions of prefixes, and export to S3 Tables, see the Amazon S3 User Guide. For pricing details, see the Amazon S3 pricing page.
Veliswa Boya.