Navigating Amazon Security Lake: Key Considerations for Successful Implementation

Amazon Security Lake: A Revolutionary Solution for Enhanced Data Protection

Avoid common pitfalls and maximize the potential of Amazon Security Lake for robust data protection and threat detection.

Amazon Web Services (AWS) has recently announced the general availability of Amazon Security Lake, a groundbreaking solution that offers enhanced data protection capabilities. In this blog, we will explore the key features and benefits of Amazon Security Lake and discuss how it can revolutionize data security practices for organizations.

Amazon Security Lake is a comprehensive security analytics and threat detection solution offered by AWS. It enables organizations to centralize, analyze, and act upon security data from various sources, such as AWS CloudTrail logs, Amazon VPC Flow Logs, and AWS Config rules. By leveraging machine learning and advanced analytics, Security Lake empowers organizations to gain valuable insights into their security posture and detect potential threats in real time.

Key Features and Benefits:

  1. Centralized Security Data Repository: Amazon Security Lake acts as a centralized repository for all security-related data, making it easier for organizations to manage and analyze vast amounts of security logs. With this unified approach, organizations can seamlessly integrate security data from various sources, eliminating data silos and enhancing visibility into their overall security landscape.
  2. Real-time Threat Detection: By employing advanced machine learning algorithms, Security Lake enables organizations to proactively detect security threats in real time. It continuously monitors and analyzes security logs, identifying anomalous activities, unauthorized access attempts, and potential security breaches. This allows organizations to respond promptly and effectively to mitigate risks and prevent data breaches.
  3. Scalable and Flexible Architecture: Built on AWS’s highly scalable infrastructure, Security Lake is designed to accommodate organizations of all sizes. It can effortlessly handle large volumes of security data, ensuring that organizations can scale their security operations without compromising performance. Additionally, Security Lake offers flexible deployment options, allowing organizations to choose between fully managed services or self-managed implementations based on their specific requirements.
  4. Simplified Investigation and Compliance: Amazon Security Lake provides powerful search and query capabilities, enabling security teams to investigate incidents and conduct forensic analysis efficiently. The solution offers pre-built dashboards, visualizations, and security analytics tools, making it easier for organizations to gain actionable insights from their security data. Moreover, Security Lake assists organizations in meeting regulatory compliance requirements by providing pre-configured compliance rules and facilitating security audits.
  5. Integration with AWS Security Services: As part of the AWS ecosystem, Security Lake seamlessly integrates with other AWS security services, such as Amazon GuardDuty and AWS Security Hub. This integration strengthens an organization’s overall security posture by leveraging the combined power of multiple security tools, improving threat detection and response capabilities.
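To make feature 4 concrete: data in the lake is typically queried with SQL (for example via Amazon Athena). The sketch below shows the flavour of such a hunt as a Python string; the database, table, and column names are illustrative assumptions, not the exact names Security Lake creates in your account.

```python
# Illustrative only: database/table/column names are assumptions for this
# sketch. A query in this style would typically be submitted via Athena.

FAILED_CONSOLE_LOGINS_SQL = """
SELECT time, actor.user.name, src_endpoint.ip
FROM amazon_security_lake_db.cloud_trail_mgmt  -- assumed table name
WHERE api.operation = 'ConsoleLogin'
  AND status = 'Failure'
ORDER BY time DESC
LIMIT 100
"""

print(FAILED_CONSOLE_LOGINS_SQL.strip())
```

The idea is simply that investigation becomes a query over one normalized store, instead of grepping per-service log archives.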

Step-by-step guide

We will walk you through the process of setting up Amazon Security Lake.

Step 1: Sign in to the AWS Management Console: To begin, sign in to the AWS Management Console using your AWS account credentials. If you don’t have an AWS account, you can create one by following the instructions on the AWS website.

Step 2: Navigate to the Amazon Security Lake Console: Once you are logged in, navigate to the Amazon Security Lake console. You can find the console by searching for “Security Lake” in the AWS services search bar.

Step 3: Create a Security Lake: In the Security Lake console, click on the “Create Security Lake” button to start the setup process. You will be prompted to provide a name for your Security Lake and choose a region where it will be deployed. Select the appropriate region based on your organization’s requirements and click “Next.”

Step 4: Configure Data Sources: In this step, you will configure the data sources for your Security Lake. Security Lake supports various AWS data sources, such as CloudTrail logs, VPC Flow Logs, and AWS Config rules. Choose the data sources that you want to integrate with Security Lake by enabling them and providing the necessary permissions. Follow the on-screen instructions to configure each data source.

Step 5: Set Up Data Ingestion: Next, you will set up data ingestion for your Security Lake. Select the desired method of data ingestion based on your organization’s needs. You can choose between real-time ingestion using Amazon Kinesis Data Firehose or batch ingestion using Amazon S3. Configure the necessary settings for data ingestion, such as the destination S3 bucket or Kinesis Firehose delivery stream.

Step 6: Enable Data Analysis: Once the data ingestion is set up, you can enable data analysis for your Security Lake. Security Lake uses machine learning algorithms and advanced analytics to detect security threats and anomalies in real time. Enable the desired analysis features, such as anomaly detection or specific AWS service integrations, to enhance your security capabilities.

Step 7: Configure Security Lake Settings: In this step, you can configure additional settings for your Security Lake. This includes defining retention periods for the ingested data, setting up access controls and permissions, and configuring notifications for security events. Adjust these settings according to your organization’s compliance and security requirements.

Step 8: Review and Create the Security Lake: Before creating the Security Lake, review all the configuration settings you have made. Ensure that the selected data sources, data ingestion methods, analysis features, and settings align with your organization’s needs. Once you are satisfied, click “Create Security Lake” to initiate the creation process.

Step 9: Monitor and Manage Your Security Lake: After the Security Lake is created, you can monitor and manage it from the Security Lake console. Utilize the pre-built dashboards, visualizations, and security analytics tools provided by Security Lake to gain valuable insights into your security posture. Continuously monitor the alerts and notifications generated by the system to promptly respond to potential security threats.
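The console walkthrough above can also be scripted. The hypothetical Python sketch below assembles the kinds of payloads you would pass to the Security Lake API (the real boto3 `securitylake` client exposes operations such as `create_data_lake` and `create_aws_log_source`); the exact field names and source identifiers shown are assumptions to illustrate the shape, so verify them against the current API reference before use.

```python
# Hedged sketch: payload shapes below are assumptions, not the verified schema.

def build_data_lake_config(region: str, retention_days: int) -> dict:
    """Roughly Steps 3 and 7: one Region of the data lake plus retention."""
    return {
        "region": region,
        "lifecycleConfiguration": {
            # Expire raw log objects once the retention period has passed.
            "expiration": {"days": retention_days},
        },
    }

def build_log_source(region: str, source_name: str) -> dict:
    """Roughly Step 4: one native AWS log source to enable."""
    # Example source identifiers (assumed): "CLOUD_TRAIL_MGMT", "VPC_FLOW"
    return {"regions": [region], "sourceName": source_name}

lake = build_data_lake_config("us-east-1", retention_days=365)
sources = [build_log_source("us-east-1", s)
           for s in ("CLOUD_TRAIL_MGMT", "VPC_FLOW")]

# With boto3 these would be passed along the lines of:
#   client = boto3.client("securitylake")
#   client.create_data_lake(configurations=[lake], ...)
#   client.create_aws_log_source(sources=sources)
print(lake["region"], [s["sourceName"] for s in sources])
```

Scripting the setup this way makes it repeatable across accounts and Regions, which matters once you move beyond a single test deployment.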


When setting up Amazon Security Lake, there are a few potential “gotchas” or challenges that you may encounter. Being aware of these pitfalls can help you navigate the setup process more effectively. Here are some important considerations:

  1. Data ingestion limitations: Amazon Security Lake supports various data sources, but each has its own limitations. For example, CloudTrail logs have a maximum size limit per file, and VPC Flow Logs have a limit on the number of records per file. Ensure that you understand and plan for these limitations to avoid potential issues with data ingestion.
  2. Permissions and access control: Configuring proper permissions and access control is crucial for Security Lake. Ensure that you grant the necessary permissions to the AWS services and resources involved in data ingestion and analysis. Additionally, make sure that you set up appropriate access controls for users and roles to prevent unauthorized access to your Security Lake.
  3. Data storage costs: While Security Lake provides a powerful solution for data analysis, keep in mind that storing large volumes of security data can incur additional costs. Be mindful of the storage costs associated with S3 buckets or Kinesis Data Firehose delivery streams, especially if you have high data ingestion rates or long retention periods. Regularly review and optimize your data storage practices to manage costs efficiently.
  4. Performance considerations: The performance of Security Lake can be influenced by factors such as data ingestion rates, analysis complexity, and the size of your security data. If you have a high volume of data or complex analysis requirements, you may need to carefully allocate resources and optimize your Security Lake configuration to ensure smooth and efficient operation.
  5. Security Lake limitations: While Security Lake offers robust security analytics capabilities, it is important to understand its limitations. For example, Security Lake may not cover all types of security logs or support custom log formats. Evaluate your specific security needs and verify that Security Lake aligns with your requirements.
  6. Monitoring and alerts: Monitoring the alerts and notifications generated by Security Lake is crucial for timely threat detection and response. However, it’s essential to set up effective monitoring practices to avoid missing critical alerts or being overwhelmed by false positives. Regularly review and fine-tune your alerting mechanisms to ensure they are tuned to your organization’s security priorities.
  7. Integration challenges: Security Lake integrates with other AWS security services, such as GuardDuty and Security Hub. While this integration enhances your overall security capabilities, it may require additional configuration and management. Be prepared to address any potential challenges related to integration, including service compatibility, data sharing, and event correlation.
  8. Compliance considerations: If your organization operates in regulated industries or needs to comply with specific security standards, ensure that Security Lake meets the necessary compliance requirements. While Security Lake provides built-in compliance rules and tools, additional configurations or customization may be necessary to align with your specific compliance needs.
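The storage-cost consideration above is worth a back-of-the-envelope calculation before you pick a retention period. The per-GB price below is an assumed figure for S3 Standard in us-east-1; check the current S3 pricing page for your Region before relying on it.

```python
# All prices here are assumptions for illustration, not quoted AWS rates.

S3_PRICE_PER_GB_MONTH = 0.023  # assumed S3 Standard price, USD, us-east-1

def monthly_storage_cost(gb_per_day: float,
                         retention_days: int,
                         price_per_gb_month: float = S3_PRICE_PER_GB_MONTH) -> float:
    """Steady-state monthly bill once retention is reached: you are always
    storing roughly gb_per_day * retention_days of data."""
    stored_gb = gb_per_day * retention_days
    return stored_gb * price_per_gb_month

# e.g. 50 GB/day of logs kept for 90 days -> 4,500 GB resident
print(round(monthly_storage_cost(50, 90), 2))  # 103.5
```

Doubling retention doubles the steady-state bill, which is why lifecycle transitions to cheaper storage classes are worth configuring early.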

Remember that it is crucial to consult AWS documentation and user guides, and to seek support from AWS experts, to address any specific challenges or concerns you may encounter during the setup and configuration of Amazon Security Lake.



Share with your network and friends.

Best path: eLearnSecurity -> OSCP -> HackTheBox

· Attack-Defense
· CTF Komodo Security
· CryptoHack
· CMD Challenge
· Exploitation Education
· Google CTF
· HackTheBox
· Hackthis
· Hacksplaining
· Hacker101
· Hacker Security
· Hacking-Lab
· ImmersiveLabs
· NewbieContest
· OverTheWire
· Practical Pentest Labs
· Pentestlab
· Hackaflag BR
· Penetration Testing Practice Labs
· PentestIT LAB
· PicoCTF
· Root-Me
· Root in Jail
· SANS Challenger
· SmashTheStack
· The Cryptopals Crypto Challenges
· Try Hack Me
· Vulnhub
· W3Challs
· WeChall
· Zenk-Security
· Cyberdefenders
· LetsDefend

Web Security Academy Series


PNPT ($299-399) – Heath Adams and TCM Security’s offering, built by penetration testers for penetration testers: the Practical Network Penetration Tester.

OSCP ($1499) – One of Offensive Security‘s many offerings; from all my conversations, it has been maintained as the gold standard for quite some time: the Offensive Security Certified Professional.

HTB CPTS ($210) – Hack The Box recently came out with their own: the HTB Certified Penetration Testing Specialist.

CRTO (399 GBP) – I’ve seen many more people grab this certification by Zero-Point Security Ltd: the Certified Red Team Operator.

eJPT ($200) – One of eLearnSecurity‘s many offerings as well. Junior-level compared to their other offerings: the eLearnSecurity Junior Penetration Tester. Their video content is amazing from what I remember during my access to the platform.

C|EH (???) – EC-Council‘s initial offering in their ethical hacking path (they also have a separate penetration testing path): the Certified Ethical Hacker. Their website makes it appear like you must purchase training in order to certify.

VHL/VHL Advanced+ ($99.00 – $749.00) – Virtual Hacking Labs has a two-tiered certification model based on the number and difficulty of machines you hack in their lab. The course cost varies with the duration of lab access purchased (1-, 3-, 6-, or 12-month durations), but every lab access purchase includes the course material. It’s great for a beginner wanting to grab the content and test the waters: the VHL/VHL Advanced+ Certificate of Completion.

Pentest+ ($392-$977) – CompTIA‘s penetration testing certification. Bundled with or without training.

Enterprise Cloud Security Best Practice Architecture


Enterprise organisations typically already have perimeter high-bandwidth firewalls with high-speed internet uplinks, solid ingress and egress security policies with full application-level and deep packet inspection, secure web gateways, and ITIL-aligned processes. The last thing you should do is open up your business to public clouds that can create internet outbound/inbound links with the click of a button, exposing your internal company network.

The best ways to mitigate the risk of a public cloud data exposure:

  1. Route all traffic through an enterprise-grade firewall for administration of any and all SaaS and cloud environments.
  2. Block all public cloud internet access and any new network connections.
  3. Monitor any changes to these configurations.
  4. Minimise usage of high-level access accounts via strict change control and key management.
  5. Set up a Direct Connect into your internal corporate firewall and direct all internet traffic through your existing strict firewall policies and monitoring.
  6. Only allow access to the cloud via your corporate private IP subnet VPN.
  7. Enable IAM and MFA access based on corporate AD Connect for cloud access.
  8. Build a policy to eliminate shadow IT.
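Points 2 and 3 can be enforced technically in AWS. As a hedged sketch, an AWS Organizations Service Control Policy (SCP) along these lines denies the API calls that create new internet paths; the action names are real EC2 actions, but treat the policy as a starting point, not a complete control.

```python
import json

# Sketch of an SCP denying creation of new internet connectivity.
# Attach via AWS Organizations to the OUs holding your workload accounts.
deny_new_internet_paths = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BlockNewInternetConnectivity",
            "Effect": "Deny",
            "Action": [
                "ec2:CreateInternetGateway",
                "ec2:AttachInternetGateway",
                "ec2:CreateNatGateway",
                "ec2:CreateVpcPeeringConnection",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(deny_new_internet_paths, indent=2))
```

Pairing a deny-by-default SCP like this with AWS Config change monitoring covers both the blocking and the detection sides of the list above.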

Digital Transformation (a study)



Digital Transformation, or DX for the cool kids, describes workplace modernisation: the move towards Software-as-a-Service platforms with pay-as-you-go monthly billing on more secure, modern platforms, in line with the fourth industrial revolution, integrating collaboration and cloud-based platforms to enable a hive mind that supports your multi-channel communication with customers.

Workplace Transformation, Customer service and Digital Marketing integration

DX Business Value

  • Cost Savings
  • Staff productivity
  • Operational resilience
  • Business agility

Industry X.0


Assessment and Strategy (ROI, business prioritisation)

SD-WAN Transformation

Workplace Transformation

Infrastructure Transformation

Application Transformation


Public Cloud (Wars) Hyperscalers Comparison

Cloud vs On-prem Security Controls

The Cost of Cloud, a Trillion Dollar Paradox

Customers are realizing the significant cost of going public cloud (originally there was an expectation that scale would lower costs, but the opposite happened, mainly due to the energy crisis).

Public Cloud (Wars) Hyperscalers Comparison

There are only three main global public cloud vendors: AWS, Azure, and Google Cloud. All three have very interesting competitive advantages for the global enterprise market; not just Pokémon Go 🙂

  • AWS
    • Advantage
      • First to global market and the absolute dominant leader in public cloud, with the most advanced, feature-rich platform, at least 10 years ahead of Azure and Google Cloud (though, of course, GCP and Azure are catching up quickly). The only option if you are building a global-scale app.
    • Disadvantage
      • Incredibly complex and expensive to run non-AWS-optimised workloads and designs.
      • Lack of enterprise experience; Agile and DevOps are just nice buzzwords in the corporate world, and the reality is very different.
      • Most enterprise workloads will require complete refactoring for migration, but VMware integration and NetApp Cloud Volumes will make enterprise workload migration a lot easier.
      • Lock-in architecture: once you build an AWS-native app, it will be very difficult to migrate out.
      • Not all services meet ‘devil’s-in-the-details’ enterprise feature requirements. AWS WAF, for example, is a take on open-source ModSecurity, but it is very difficult to customise and cannot compete with F5 WAF features.
      • The AWS product suite has complex limitations that won’t surface until you migrate.
      • AWS people are expensive. (Like me.)
      • An AWS Availability Zone can span multiple data centres, and the customer is responsible for architecting resilience using multiple Regions and Availability Zones and for backing up their services. The key factor is that the AWS SLA promise is made at the Region level, so it is vital to factor the AWS SLA metrics into your design and cost estimates.
  • Azure
    • Advantage
      • Every single enterprise customer already uses most Microsoft products: Microsoft Office 365, Microsoft Active Directory, Microsoft Windows operating systems, Microsoft Storage Server, Microsoft Azure Stack, Microsoft Azure AD SSO. (These technologies provide the stickiness for Azure.)
      • Microsoft Windows, Microsoft Active Directory, and Microsoft Office 365 are used by almost every corporate customer in the world. As customers transition from on-premises to cloud and SaaS, they will move workloads to Office 365 and Azure AD, and then set up a tenant on Azure, making it a very easy transition.
      • Microsoft also restricts some applications and operating systems, via licensing, on shared compute platforms other than its own Azure platform. E.g. Microsoft RDS and Windows 10 are only allowed on Azure. There are many other complex licensing issues that you will only discover while reading all the licensing legal terms. (I have a number of articles discussing this on this blog.)
      • Microsoft is also enabling on-premises Azure Stack, including its own Microsoft Storage Server, which will make it easy to deploy and transition from on-premises to Azure.
    • Disadvantage
      • The Microsoft console is not as feature-rich, and the way available features are rolled out can cause headaches if you are not experienced enough to notice.
      • Microsoft technology takes a great deal of expertise to maintain and to get working.
      • Microsoft Azure Stack and Hyper-V are not as high-performance as VMware ESXi or AWS at a very low level.
  • Google Cloud
    • Advantage
      • Google ambient computing, PWAs, Google Chrome, Google Dart, Google Firebase. Software will all be SaaS and consumption-based; there is no reason for any customer to buy software. In the future we will all consume software from a marketplace SaaS provider and multi-cloud. Google is heading towards market dominance without the ego of building a monopoly: they are working with other, competing partners to build outcomes for customers.
      • Google is looking far beyond the current market; they are working towards infinite reach. It’s actually insane if you think about this company’s achievements and future.
      • Google services run on massive infrastructure globally and, just like Amazon, their primary customer is themselves.
      • They are taking a different approach to gaining market share. As Google provides the most widely used browser, they are pushing PWAs for development, and the whole Google Cloud platform is very much accessible via a developer’s IDE. It’s very easy to start creating a multi-platform application using a Google framework such as Flutter or Angular and spin up services using Google Firebase. Connecting the developer’s IDE directly to the Google Cloud platform makes it a very easy option for DevOps and for developing MVPs. That is the future: a developer can build a global-scale app, with AI, big data, blockchain and whatever else, straight from an IDE and have everything created for them, like a WAF, CDN, etc. Google is building the utopia for software. (I want in, please.)
      • Google is actually cheaper than AWS.
    • Disadvantage
      • Late to the game; they need to move fast and differentiate from AWS and Azure in terms of feature releases.
      • Google is a search and advertising company; moving to cloud/data centre infrastructure and enterprise applications is a giant leap. They will need to hire enterprise presales.

Update 10/11/19: based on recent research, Google Cloud is now far superior to AWS.

  • Google Cloud allows you to depart from the predefined configurations and customize your instance’s CPU and RAM resources to fit your workload. These are known as custom machine types. Other types include Google Cloud Preemptible VMs.
  • GCP has higher performance for Storage
  • GCP is priced lower/competitively to AWS
    • Google Cloud Platform also launched their per second billing and Google seems to be slightly lower in pricing.
    • Comparison of Google Cloud Committed Use Discounts vs AWS Reserved Instances
    • Another really huge cost-saving discount that Google Cloud offers is what they call Sustained Use Discounts. These are automatic discounts that Google Cloud Platform provides the longer you use an instance, unlike AWS, where you have to reserve the instance for a long period of time to get a discount.
  • GCP free tier with no time limits attached.
    • Google Cloud offers a $300 credit which lasts for 12 months. And as of March 2017, they also have a free tier with no time limits attached. Here is an example of an instance you could run forever for free with GCP.
      • f1-micro instance with 0.2 virtual CPU, 0.60 GB of memory, backed by a shared physical core. (US regions only)
      • 30 GB disk with 5 GB cloud storage
  • GCP Network Tiers – With Network Service Tiers, GCP is the first major public cloud to offer a tiered cloud network.
    • Premium Tier delivers GCP traffic over Google’s well-provisioned, low latency, highly reliable global network. This network consists of an extensive global private fiber network with over 100 points of presence (POPs) across the globe. By this measure, Google’s network is the largest of any public cloud provider. See the Google Cloud network map. GCP customers benefit from the global features within Global Load Balancing, another Premium Tier feature. You not only get the management simplicity of a single anycast IPv4 or IPv6 Virtual IP (VIP), but can also expand seamlessly across regions, and overflow or fail over to other regions.
    • Google Cloud Platform launched separate premium tier and standard tier networks. Redundancy is key, and that’s why there are at least three independent paths (N+2 redundancy) between any two locations on the Google network, helping ensure that traffic continues to flow between the locations even in the event of a disruption.
  • GCP has lower latency than AWS because Google has its own backhaul fibre-optic network: traffic travels over Google’s backbone, not over the Internet.
    • The FASTER Cable System gives Google access to up to 10 Tbps of the cable’s total 60 Tbps of bandwidth between the US and Japan, which they use for Google Cloud and Google App customers. The 9,000 km trans-Pacific cable is the highest-capacity undersea cable ever built; it lands in Oregon in the United States and at two landing points in Japan. Google is also one of six members with sole access to a pair of 100 Gb/s x 100 wavelength optical transmission strands between Oregon and Japan.
  • Google Cloud also has a unique feature in its ability to live migrate virtual machines. Live migration allows Google’s engineers to address issues such as patching, repairing, and updating software and hardware without the need for you to worry about machine reboots.
    • AWS provides Availability Zones and has concepts that your design needs to adhere to, such as availability and durability.
    • Availability: the ability of a system or component to be operational and accessible when required (system uptime). The availability of a system or component can be increased by adding redundancy to it. In case of a failure, the redundant parts prevent the failure of the entire system (e.g. a database cluster with several nodes).
    • Durability: the ability of a system to assure that data is stored and remains consistent as long as it is not changed by legitimate access. This means that data should not get corrupted or disappear because of a system malfunction.
  • GCP security has been built over 15 years protecting Google’s own services such as Gmail, and GCP implements security at its core via GCP identity services and other features.
  • Google Firebase integration – Google provides application development languages and frameworks such as Angular, Go, Dart and Flutter that enable developers to create high-performance multi-platform and native applications very quickly. Integration with Google Firebase means a developer can access the full capability of GCP via an IDE such as Visual Studio Code. The nirvana is the ability to develop a front-end and back-end app in the IDE, then connect to and manage the full capability of the GCP cloud via Firebase and the IDE/your application architecture.
  • GCP and Infrastructure as Code have really good integration with HashiCorp tools and Ansible.
  • Google Kubernetes Engine has advantages over AWS container services for security, orchestration and management.
  • Google Cloud Platform has been Carbon neutral since 2017
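The Sustained Use Discounts mentioned above are easy to sketch numerically. The tier multipliers below are the ones historically published for N1 machine types; treat them as an assumption and verify against the current GCP pricing page.

```python
# Assumed N1 sustained-use tiers: each quarter of the month is billed at a
# progressively lower multiplier of the base hourly rate.
TIERS = [(0.25, 1.00), (0.25, 0.80), (0.25, 0.60), (0.25, 0.40)]

def effective_rate(usage_fraction: float) -> float:
    """Average multiplier on the base rate, given the fraction of the
    month (0.0-1.0) the instance actually ran."""
    billed = 0.0
    used = min(max(usage_fraction, 0.0), 1.0)
    for width, multiplier in TIERS:
        portion = min(used, width)
        billed += portion * multiplier
        used -= portion
        if used <= 0:
            break
    return billed / usage_fraction if usage_fraction > 0 else 0.0

# Full-month usage averages (1.0 + 0.8 + 0.6 + 0.4) / 4 = 0.70, i.e. an
# automatic 30% discount with no upfront reservation required.
print(round(effective_rate(1.0), 2))  # 0.7
```

This is the contrast with AWS Reserved Instances: the discount accrues automatically from usage rather than from a one- or three-year commitment.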


There are still plenty of years left in traditional data centre technologies and new emerging scale-out and management platforms. You can easily design a server infrastructure with the latest tech at 1/10 of the cost of AWS, and you can then sweat that asset for 10+ years. I worked on IBM non-stop servers and they are still going after 20+ years. That is pretty good ROI for static apps that don’t need to scale out.

Enterprise architecture for digital transformation is required; a CIO saying everything needs to go to AWS is not the right move. You need a proper assessment of your business, future strategy and current workloads. Organisations need to build a 10+ year strategy and work on a gradual migration to cloud adoption and the latest data centre technologies. It is not a matter of blindly picking a cloud vendor and going. Depending on your workloads, you may be better off staying inside a secure data centre. I started selling consulting services for assessing workloads to transition to cloud; not a single customer wanted this. Selling a proper migration strategy into cloud is not something most organisations take seriously. Cost/risk/agility is not an easy exercise, but assessing your current workloads is actually very simple with the advancement in cloud migration tools. IMO, if you have a multi-DC, branch and multi-cloud environment, you have stuffed it up and lost any ROI!
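The sweat-the-asset argument above boils down to simple arithmetic. Every figure in this sketch is an illustrative assumption, not a quote or benchmark; plug in your own numbers.

```python
# Toy TCO comparison: one-off hardware build vs recurring cloud bill.
# All figures below are made-up assumptions for illustration only.

def on_prem_tco(capex: float, annual_opex: float, years: int) -> float:
    """One-off hardware build plus yearly power/space/support costs."""
    return capex + annual_opex * years

def cloud_tco(monthly_bill: float, years: int) -> float:
    """Recurring cloud bill over the same horizon."""
    return monthly_bill * 12 * years

# A static app that never needs to scale out, held for 10 years:
print(on_prem_tco(100_000, 10_000, 10))  # 200000
print(cloud_tco(5_000, 10))              # 600000
```

The longer the asset is sweated and the flatter the workload, the more the comparison favours the data centre; bursty or fast-growing workloads shift it back towards cloud.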

Service comparisons

Service Category Service AWS Google Cloud
Compute IaaS Amazon Elastic Compute Cloud Compute Engine
  PaaS AWS Elastic Beanstalk App Engine
  FaaS AWS Lambda Cloud Functions
Containers CaaS Amazon Elastic Kubernetes Service, Amazon Elastic Container Service Google Kubernetes Engine
  Containers without infrastructure AWS Fargate Cloud Run
  Container registry Amazon Elastic Container Registry Container Registry
Networking Virtual networks Amazon Virtual Private Cloud Virtual Private Cloud
  Load balancer Elastic Load Balancer Cloud Load Balancing
  Dedicated interconnect AWS Direct Connect Cloud Interconnect
  Domains and DNS Amazon Route 53 Google Domains, Cloud DNS
  CDN Amazon CloudFront Cloud CDN
  DDoS firewall AWS Shield, AWS WAF Google Cloud Armor
Storage Object storage Amazon Simple Storage Service Cloud Storage
  Block storage Amazon Elastic Block Store Persistent Disk
  Reduced-availability storage Amazon S3 Standard-Infrequent Access, Amazon S3 One Zone-Infrequent Access Cloud Storage Nearline and Cloud Storage Coldline
  Archival storage Amazon Glacier Cloud Storage Archive
  File storage Amazon Elastic File System Filestore
  In-memory data store Amazon ElastiCache for Redis Memorystore
Database RDBMS Amazon Relational Database Service, Amazon Aurora Cloud SQL, Cloud Spanner
  NoSQL: Key-value Amazon DynamoDB Firestore, Cloud Bigtable
  NoSQL: Indexed Amazon SimpleDB Firestore
  In-memory data store Amazon ElastiCache for Redis Memorystore
Data analytics Data warehouse Amazon Redshift BigQuery
  Query service Amazon Athena BigQuery
  Messaging Amazon Simple Notification Service, Amazon Simple Queueing Service Pub/Sub
  Batch data processing Amazon Elastic MapReduce, AWS Batch Dataproc, Dataflow
  Stream data processing Amazon Kinesis Dataflow
  Stream data ingest Amazon Kinesis Pub/Sub
  Workflow orchestration Amazon Data Pipeline, AWS Glue Cloud Composer
Management tools Deployment AWS CloudFormation Cloud Deployment Manager
  Cost management AWS Budgets Cost Management
Operations Monitoring Amazon CloudWatch Cloud Monitoring
  Logging Amazon CloudWatch Logs Cloud Logging
  Audit logging AWS CloudTrail Cloud Audit Logs
  Debugging AWS X-Ray Cloud Debugger
  Performance tracing AWS X-Ray Cloud Trace
Security & identity IAM Amazon Identity and Access Management Cloud Identity and Access Management
  Secret management AWS Secrets Manager Secret Manager
  Encrypted keys AWS Key Management Service Cloud Key Management Service
  Resource monitoring AWS Config Cloud Asset Inventory
  Vulnerability scanning Amazon Inspector Web Security Scanner
  Threat detection Amazon GuardDuty Event Threat Detection (beta)
  Microsoft Active Directory AWS Directory Service Managed Service for Microsoft Active Directory
Machine learning Speech Amazon Transcribe Speech-to-Text
  Vision Amazon Rekognition Cloud Vision
  Natural Language Processing Amazon Comprehend Cloud Natural Language API
  Translation Amazon Translate Cloud Translation
  Conversational interface Amazon Lex Dialogflow Enterprise Edition
  Video intelligence Amazon Rekognition Video Video Intelligence API
  Auto-generated models Amazon SageMaker Autopilot AutoML
  Fully managed ML Amazon SageMaker AI Platform
Internet of Things IoT services Amazon IoT Cloud IoT