Working with cloud databases often feels faster when you can skip the browser and get straight to the point. AWS DynamoDB is a fully managed NoSQL service known for its simplicity and scalability. Managing it through the web console can sometimes feel clunky, especially for repetitive tasks. That’s where the AWS Command Line Interface (CLI) comes in handy.
With just a few commands, you can create, update, monitor, and delete DynamoDB tables directly from your terminal. It’s scriptable, reliable, and easy to integrate into automated workflows. This guide provides clear, practical steps to create and manage DynamoDB tables using AWS CLI, without unnecessary complexity.
Before working with DynamoDB from the command line, you’ll need to set up the AWS CLI on your machine. Download the installer from AWS and follow the simple steps to get started. To connect it to your account, run:
aws configure
You’ll be asked for your access key, secret key, default region, and preferred output format. This configuration tells the CLI which account and region to work with. Ensure the IAM user or role you’re using has permission to work with DynamoDB. Once configured, you’re ready to create and manage tables easily.
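Before moving on, it can help to confirm which identity and region the CLI is actually using. The two commands below are a quick sanity check against the default profile configured above:
aws sts get-caller-identity
aws configure get region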
After setting up AWS CLI, creating a DynamoDB table is straightforward. The command is short, but understanding each part helps avoid mistakes and design your table properly. You’ll need to decide on the table name, key schema, attribute types, and capacity options — either on-demand or provisioned throughput.
For example, to create a Users table where UserId is the unique identifier, use:
aws dynamodb create-table \
--table-name Users \
--attribute-definitions AttributeName=UserId,AttributeType=S \
--key-schema AttributeName=UserId,KeyType=HASH \
--billing-mode PAY_PER_REQUEST
This setup defines UserId as a string and uses it as the partition key. The PAY_PER_REQUEST billing mode means you’re charged based on usage, without pre-allocating read/write units. To manage capacity yourself, switch to provisioned throughput by adding:
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
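Putting that together, a provisioned-capacity version of the Users table might look like the sketch below; the capacity values are illustrative and should match your expected traffic:
aws dynamodb create-table \
--table-name Users \
--attribute-definitions AttributeName=UserId,AttributeType=S \
--key-schema AttributeName=UserId,KeyType=HASH \
--billing-mode PROVISIONED \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5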
For complex scenarios, define a composite primary key and add secondary indexes. For instance, create an Orders table with OrderId as the partition key, CustomerId as the sort key, and a global secondary index:
aws dynamodb create-table \
--table-name Orders \
--attribute-definitions \
AttributeName=OrderId,AttributeType=S \
AttributeName=CustomerId,AttributeType=S \
--key-schema \
AttributeName=OrderId,KeyType=HASH \
AttributeName=CustomerId,KeyType=RANGE \
--billing-mode PAY_PER_REQUEST \
--global-secondary-indexes \
'[{
"IndexName":"CustomerIndex",
"KeySchema":[{"AttributeName":"CustomerId","KeyType":"HASH"}],
"Projection":{"ProjectionType":"ALL"}
}]'
This approach provides flexibility while keeping your data organized for efficient access.
After running the create-table command, the CLI returns a JSON output describing the table’s properties. The table creation process is asynchronous, starting with a status of CREATING. Check its status with:
aws dynamodb describe-table --table-name Users
This output includes the current status. Once it changes to ACTIVE, the table is ready for use. List all your tables using:
aws dynamodb list-tables
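If you are scripting table creation, you can also block until the table is ready instead of polling by hand. A minimal sketch using the built-in waiter and a JMESPath query for just the status field:
aws dynamodb wait table-exists --table-name Users
aws dynamodb describe-table --table-name Users --query "Table.TableStatus" --output text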
Sometimes, you need to make changes to an existing table. While primary keys cannot be changed directly, you can adjust provisioned throughput, switch between billing modes, and add or delete global secondary indexes. To increase write capacity on a provisioned table:
aws dynamodb update-table \
--table-name Users \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=10
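Keep in mind that this only applies to tables already in provisioned mode. If the table was created with on-demand billing, as in the earlier Users example, you would switch billing modes in the same call; a rough sketch:
aws dynamodb update-table \
--table-name Users \
--billing-mode PROVISIONED \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=10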
To add a global secondary index (omit the ProvisionedThroughput block if the table uses on-demand billing):
aws dynamodb update-table \
--table-name Users \
--attribute-definitions AttributeName=Email,AttributeType=S \
--global-secondary-index-updates \
'[{
"Create": {
"IndexName": "EmailIndex",
"KeySchema": [
{"AttributeName":"Email","KeyType":"HASH"}
],
"Projection": {
"ProjectionType":"ALL"
},
"ProvisionedThroughput": {
"ReadCapacityUnits": 5,
"WriteCapacityUnits": 5
}
}
}]'
This adds an index called EmailIndex for efficient email lookups. Monitor its status using the describe-table command.
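Once the new index reports an IndexStatus of ACTIVE, you can query it directly. The example below is an illustrative sketch that assumes items store an Email attribute:
aws dynamodb describe-table --table-name Users --query "Table.GlobalSecondaryIndexes[].IndexStatus"
aws dynamodb query \
--table-name Users \
--index-name EmailIndex \
--key-condition-expression "Email = :email" \
--expression-attribute-values '{":email":{"S":"user@example.com"}}'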
When you no longer need a table, remove it easily with:
aws dynamodb delete-table --table-name Users
This deletes the table and all its data. Be cautious, as the data cannot be recovered after deletion.
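To guard against accidental deletion, you can turn on deletion protection for important tables, which causes delete-table to fail until it is disabled again; a brief sketch:
aws dynamodb update-table --table-name Users --deletion-protection-enabled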
Regular backups are a good practice. Create on-demand backups using:
aws dynamodb create-backup --table-name Users --backup-name UsersBackup
The output provides a BackupArn, which you can use to restore the backup later:
aws dynamodb restore-table-from-backup \
--target-table-name UsersRestored \
--backup-arn <BackupArn>
This creates a new table from the backup. Use point-in-time recovery, if enabled, to restore the table to any moment within the last 35 days.
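A point-in-time restore also creates a new table rather than overwriting the original. A rough sketch, assuming point-in-time recovery has been enabled on Users (covered below) and using an illustrative target name:
aws dynamodb restore-table-to-point-in-time \
--source-table-name Users \
--target-table-name UsersRecovered \
--use-latest-restorable-time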
Monitoring is crucial for managing DynamoDB tables. The CLI allows you to enable or disable point-in-time recovery, check the table’s status, and view metrics via CloudWatch. Enable point-in-time recovery with:
aws dynamodb update-continuous-backups \
--table-name Users \
--point-in-time-recovery-specification PointInTimeRecoveryEnabled=true
Use CloudWatch to monitor read and write throughput, throttled requests, and other key metrics to keep tables healthy and performant.
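For example, consumed write capacity for the Users table can be pulled straight from CloudWatch; the time window and period below are placeholders to adjust:
aws cloudwatch get-metric-statistics \
--namespace AWS/DynamoDB \
--metric-name ConsumedWriteCapacityUnits \
--dimensions Name=TableName,Value=Users \
--start-time 2024-01-01T00:00:00Z \
--end-time 2024-01-01T01:00:00Z \
--period 300 \
--statistics Sum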
Managing DynamoDB tables through the AWS CLI offers flexibility and efficiency for developers and administrators who prefer working in a terminal. From creating simple tables to defining complex schemas with indexes, the CLI provides clear commands that can be integrated into scripts and workflows. Backing up and restoring data, adjusting throughput, and monitoring tables become much simpler with these commands. This hands-on approach helps you stay close to your infrastructure and gain confidence in how your data is organized and maintained.
For more detailed information on AWS CLI commands for DynamoDB, check out the official AWS CLI documentation.