Highlights From the AWS Storage Day

in DevOps, Cloud Computing


Amazon Web Services (AWS) hosted its annual storage event, where participants witnessed the launch of several new features that help scale product development and storage on its cloud services. These launches aim to give AWS leverage over other cloud providers by demonstrating a constant, yearly upgrade of its services.


    Amazon has been hosting AWS Storage Day annually for three years now, with the first held in 2019 and the most recent on September 2, 2021. The event is held yearly to announce updates and launches, as well as steps being taken to improve storage services for users. This year's event attracted over 16,000 participants.

    At this year's event, AWS leaders took the opportunity to make some major announcements and shared exclusive information with participants across the globe. The announcements included the launch of several innovations meant to optimize how AWS operates. AWS lets developers create the infrastructure that helps them continuously scale to meet the needs of an ever-changing market, and the need for developers to build without restriction propels AWS to keep transforming its storage capabilities.

    A peek into the star product of the event.

    source: https://aws.amazon.com/blogs/aws/new-amazon-fsx-for-netapp-ontap/

    Amazon FSx for NetApp ONTAP was undoubtedly the most anticipated launch in the build-up to the event. It is a remarkable cloud service because it brings NetApp's ONTAP file system into the same fully managed FSx family as Amazon FSx for Windows File Server and Amazon FSx for Lustre.

    AWS allying with NetApp to build a product was considered a timely move: debuting the ONTAP platform in the AWS cloud storage portfolio created an avenue for enterprise workloads to be migrated to the cloud. The service provides adequate support for these workloads, so users can now move more of them to the cloud and improve their rate of innovation.

    ONTAP is now more useful, as it has extended its service beyond on-premises use and can act as a cloud-backed data management platform. All of ONTAP's features, including replication and snapshots, can be used to efficiently manage DevOps pipelines while remaining cost-effective with storage.

    The service, developed jointly by the AWS and NetApp teams, works seamlessly alongside existing ONTAP deployments, whether they run on-premises or in the cloud.

    It has many use cases, such as hybrid deployments that support business continuity, cloud bursting using both local and cloud-based caches, and easier cloud migration for users still running on-premises services.

    Amazon FSx for NetApp ONTAP offers efficient capacity management, lets users choose their connectivity, and provides integration points. It is also highly available, being built across multiple AWS Availability Zones; AWS operates more than one Availability Zone per Region and offers strong availability guarantees to its users. In short, users get all of NetApp's file system technology, now operated entirely on the AWS cloud.
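    As a rough sketch of what provisioning this looks like, the parameters below outline an `fsx:CreateFileSystem` request for an ONTAP file system. The subnet IDs, security group, and sizing values are illustrative placeholders, and the actual boto3 call is left commented out since it requires AWS credentials:

```python
# Sketch: parameters for creating an Amazon FSx for NetApp ONTAP file system.
# Subnet, security group, and sizing values are illustrative placeholders.
params = {
    "FileSystemType": "ONTAP",
    "StorageCapacity": 1024,  # GiB
    "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
    "SecurityGroupIds": ["sg-cccc3333"],
    "OntapConfiguration": {
        "DeploymentType": "MULTI_AZ_1",  # spans two Availability Zones
        "ThroughputCapacity": 512,       # MB/s
        "PreferredSubnetId": "subnet-aaaa1111",
    },
}

# With credentials configured, the file system would be created with:
# import boto3
# fsx = boto3.client("fsx")
# response = fsx.create_file_system(**params)
```

    The `MULTI_AZ_1` deployment type is what places the file system across two Availability Zones for the high availability described above.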

    Other products launched.

    Among the other products launched by Amazon at this event was Amazon EFS Intelligent-Tiering, a further step toward serverless operation across its platforms. The tiering is automatic and fully managed: once enabled, users' files are stored in the most cost-appropriate storage class at the right time, so users with shared file storage can save money without manual intervention.

    EFS Intelligent-Tiering uses the lifecycle management feature to monitor workload access patterns and moves files between storage classes accordingly. Users who previously paid standard EFS storage pricing save money on files that can live in cheaper tiers, and when access patterns change, files are moved back automatically.
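    A minimal sketch of enabling this on an existing file system is shown below. The file system ID is a placeholder; the two lifecycle policies tell EFS to move files not accessed for 30 days into the Infrequent Access tier and to move them back to the primary storage class on their next access:

```python
# Sketch: an EFS lifecycle configuration that moves files unused for 30 days
# to Infrequent Access, and back to primary storage on first access.
lifecycle_policies = [
    {"TransitionToIA": "AFTER_30_DAYS"},
    {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
]

# With credentials configured (file system ID is a placeholder):
# import boto3
# efs = boto3.client("efs")
# efs.put_lifecycle_configuration(
#     FileSystemId="fs-0123456789abcdef0",
#     LifecyclePolicies=lifecycle_policies,
# )
```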

    Amazon S3 Multi-Region Access Points were also launched, letting users build products across different regions without adding complexity to their applications. The feature provides a single global endpoint that can span buckets in multiple AWS Regions, so you can build applications that span several regions the same way you build applications that only need one.
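    In practice, applications address a Multi-Region Access Point through an ARN built from the account ID and an alias that AWS generates when the access point is created. The account ID and alias below are hypothetical placeholders; the point of the sketch is that the ARN stands in for a bucket name, so application code does not change per Region:

```python
# Sketch: addressing a Multi-Region Access Point (MRAP).
# The account ID and MRAP alias are hypothetical placeholders; AWS generates
# the alias when the access point is created.
account_id = "123456789012"
mrap_alias = "mfzwi23gnjvgw.mrap"
mrap_arn = f"arn:aws:s3::{account_id}:accesspoint/{mrap_alias}"

# The ARN is used wherever a bucket name would go, e.g.:
# import boto3
# s3 = boto3.client("s3")
# obj = s3.get_object(Bucket=mrap_arn, Key="reports/latest.csv")
```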

    S3 Intelligent-Tiering was also upgraded to optimize storage costs further: the minimum storage duration, and the monitoring and automation charges for objects smaller than 128 KB, have been removed. S3 Intelligent-Tiering can now be used for any of a user's workloads with constantly changing access patterns.
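    Opting an object into Intelligent-Tiering is just a matter of setting its storage class at upload time. A small sketch, with placeholder bucket and key names and the live call commented out:

```python
# Sketch: uploading an object directly into S3 Intelligent-Tiering.
# Bucket and key names are placeholders.
put_params = {
    "Bucket": "example-bucket",
    "Key": "logs/app.log",
    "Body": b"example payload",
    "StorageClass": "INTELLIGENT_TIERING",
}

# With credentials configured:
# import boto3
# boto3.client("s3").put_object(**put_params)
```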

    We have previously discussed more tools and checklists that can be used to optimize costs on AWS.

    For users who need to pre-process their data, AWS Transfer Family managed workflows take on the major part of that pre-processing. They set up the infrastructure required to run code when files are uploaded, check for errors constantly, and verify that changes are logged. Workflows can be configured so that all vital tasks run in the background.
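    As a sketch of what such a workflow can look like, the steps below copy each uploaded file to a staging prefix and tag it for a downstream process. Bucket names, prefixes, and tag values are illustrative placeholders:

```python
# Sketch: steps for a Transfer Family managed workflow that copies each
# uploaded file to a staging prefix, then tags it for downstream processing.
# Bucket names, prefixes, and tag values are illustrative placeholders.
workflow_steps = [
    {
        "Type": "COPY",
        "CopyStepDetails": {
            "Name": "copy-to-staging",
            "DestinationFileLocation": {
                "S3FileLocation": {"Bucket": "example-staging", "Key": "incoming/"}
            },
            "OverwriteExisting": "TRUE",
        },
    },
    {
        "Type": "TAG",
        "TagStepDetails": {
            "Name": "mark-for-scan",
            "Tags": [{"Key": "status", "Value": "needs-scan"}],
        },
    },
]

# With credentials configured:
# import boto3
# transfer = boto3.client("transfer")
# transfer.create_workflow(
#     Description="post-upload processing",
#     Steps=workflow_steps,
# )
```

    Steps run in order, so the tag step only executes after the copy step succeeds.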

    The newly launched AWS backup manager allows users to customize their backup needs across various AWS-supported services. Controls like backup frequency and retention period can be predefined and automated. It helps users continuously monitor changes and alerts them when a backup does not follow the predetermined parameters, and users can also auto-generate reports for audits when required. Check out all the resources we think you should back up and how you can do that.
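    The frequency and retention controls mentioned above live in a backup plan. A minimal sketch, assuming a placeholder vault name, of a plan that runs daily at 05:00 UTC and deletes recovery points after 35 days:

```python
# Sketch: an AWS Backup plan running daily at 05:00 UTC, with recovery
# points deleted after 35 days. The vault name is a placeholder.
backup_plan = {
    "BackupPlanName": "daily-35-day-retention",
    "Rules": [
        {
            "RuleName": "daily",
            "TargetBackupVaultName": "example-vault",
            "ScheduleExpression": "cron(0 5 ? * * *)",
            "Lifecycle": {"DeleteAfterDays": 35},
        }
    ],
}

# With credentials configured:
# import boto3
# boto3.client("backup").create_backup_plan(BackupPlan=backup_plan)
```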

    The Amazon EBS direct APIs were also updated to increase support for snapshot creation. Users can now create snapshots of up to 64 TiB, even from on-premises storage, which is the largest snapshot size available. For more about the event, a complete rundown was uploaded on the AWS blog here.
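    The EBS direct APIs write snapshot data in 512 KiB blocks, and each `PutSnapshotBlock` call must carry a base64-encoded SHA256 checksum of the block. The sketch below computes that checksum for one placeholder block; the snapshot ID and the API call itself are placeholders left commented out:

```python
import base64
import hashlib

# Sketch: preparing one 512 KiB block for the EBS direct API PutSnapshotBlock,
# which requires a base64-encoded SHA256 checksum of the block data.
BLOCK_SIZE = 512 * 1024  # EBS direct APIs write data in 512 KiB blocks

block_data = b"\x00" * BLOCK_SIZE  # placeholder block contents
checksum = base64.b64encode(hashlib.sha256(block_data).digest()).decode("ascii")

# With credentials and a snapshot opened via ebs.start_snapshot:
# import boto3
# ebs = boto3.client("ebs")
# ebs.put_snapshot_block(
#     SnapshotId="snap-0123456789abcdef0",  # placeholder
#     BlockIndex=0,
#     BlockData=block_data,
#     DataLength=BLOCK_SIZE,
#     Checksum=checksum,
#     ChecksumAlgorithm="SHA256",
# )
```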

    The AWS cheat sheet also contains more commands that help users make the most of AWS cloud servers, along with more ways to optimize your cloud subscriptions.





    editorial
    The Chief I/O

    The team behind this website. We help IT leaders, decision-makers and IT professionals understand topics like Distributed Computing, AIOps & Cloud Native
