DEEP DIVE ON DYNAMO DB

SQL vs NoSQL

  • SQL optimises for storage: normalized data structures deduplicate data on disk, and ad-hoc queries let you build whatever materialised views the application needs. The application can not only select individual values but also pull data from one table and join it with another. For example, with a customers table and an orders table, getting all orders for your customers means a “select from customers inner join orders”, whereas in NoSQL you would place customers and their orders together in a single table/document, which is faster to read. SQL scales vertically (you buy a bigger box), whereas NoSQL can scale horizontally/incrementally.
  • SQL is best in the OLAP (online analytical processing) space
  • OLTP apps, on the other hand, always process data the exact same way: the user clicks a button and the same things happen, so you could store the data in a normalised way. But if you really want performance and scale in OLTP (Online Transaction Processing - transaction-oriented applications, typically data entry and retrieval), you need NoSQL.
  • Normally in SQL, a products table would tie to a bunch of other tables through one-to-one, one-to-many and many-to-many relationships. What kind of queries would you run to get a list of all products? Three different queries: one is easy, but the other two are pretty complex, and the CPU spends a lot of time hopping around the disk to build that materialized view. In NoSQL you don’t want normalized data, you want aggregations: store hierarchies as items and keep a denormalised view.
  • DynamoDB is a schemaless DB; the only mandatory element is the primary key. An item is a row in DynamoDB, and the primary key must be unique for that item. The other elements are attributes; an item can have zero or more attributes. The primary key always includes a partition (hash) key and may optionally include a sort key.
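  • A minimal sketch of the customers/orders single-table idea above, using boto3. The table name ('AppTable') and the pk/sk attribute names are illustrative assumptions, not anything prescribed by DynamoDB.

    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource('dynamodb').Table('AppTable')

    # The customer profile and its orders share a partition key, so they live
    # together as one aggregation - no join needed.
    table.put_item(Item={'pk': 'CUSTOMER#42', 'sk': 'PROFILE', 'name': 'Alice'})
    table.put_item(Item={'pk': 'CUSTOMER#42', 'sk': 'ORDER#2024-01-15#1001', 'total': 30})
    table.put_item(Item={'pk': 'CUSTOMER#42', 'sk': 'ORDER#2024-02-03#1002', 'total': 55})

    # One query returns the customer and all of their orders - the NoSQL
    # equivalent of "select from customers inner join orders".
    resp = table.query(KeyConditionExpression=Key('pk').eq('CUSTOMER#42'))
    for item in resp['Items']:
        print(item['sk'])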

Dynamo DB Core Features and Functionalities

  • It is a fully managed NoSQL database
  • Document or key-value database
  • Scales to any workload
  • Fast and consistent
  • Access control - IAM users and permissions on a table and item level
  • Supports event-driven programming, which is how you would use it in serverless architectures
  • Each Read Capacity Unit (RCU) represents 4KB of data, and each Write Capacity Unit (WCU) 1KB. Every read/write operates on at least 1 item, so a write always consumes at least 1 WCU, and sizes are always rounded up. Reading a 12KB item consumes 3 RCU; reading 2KB still consumes 1 RCU.
  • Two types of Read operations;
    • Scans: a scan by default reads every item and every attribute throughout the table, consuming capacity for all of it. The benefit of a scan is that you are not limited to a single partition key.
    • Queries: really efficient. A query lets you specify a partition key value and retrieve all the items under it, and you can narrow the result further with sort key conditions, reducing the amount of data read from the DB and saving cost. Filters themselves, however, don’t reduce consumed capacity; only the sort key condition does.
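  • A quick sketch of the two read paths with boto3. Table and attribute names ('Orders', 'customer_id', 'order_date', 'order_total') are assumptions for illustration.

    import boto3
    from boto3.dynamodb.conditions import Key, Attr

    table = boto3.resource('dynamodb').Table('Orders')

    # Query: targets one partition key and narrows by sort key, so only the
    # matching items are read (and billed).
    resp = table.query(
        KeyConditionExpression=Key('customer_id').eq('42') &
                               Key('order_date').begins_with('2024-01'))

    # Scan: walks the whole table; the filter trims the response, but every
    # scanned item still consumes read capacity.
    resp = table.scan(FilterExpression=Attr('order_total').gt(100))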
  • The major underlying construct of DynamoDB is the partition, which maxes out at 10GB of data and 1000 WCU / 3000 RCU. If data grows beyond 10GB, or throughput goes beyond the WCU/RCU limit, partitions are added; partitions are never removed. The WCU/RCU you define is then allocated across the partitions, and DynamoDB feeds the partition key into a hash function to decide which partition to write the data to. The more the partition keys vary, the better the performance.
  • Global Tables:
  • Global tables provide multi-master, cross-region read/write replication.
  • You create tables in multiple regions and configure replication between all of the replicas; the global table is the umbrella over the individual DDB tables in the different regions.
  • Last writer wins is used for conflict resolution.
  • Reads and writes can occur in any region; strongly consistent reads are ONLY possible in the same region as the writes.
  • Indexes come in two types: Global Secondary Indexes (GSI) and Local Secondary Indexes (LSI)
    • LSI: can only be created at table creation time. It specifies an alternative sort key (the partition key stays the same). Limit of 5 LSIs per base table. Indexes are sparse: only items which have a value in the alternative sort key are added to the index, which reduces RCU consumption.
    • GSI: offers much better functionality. It defines different partition and sort keys, and you can project individual attributes into the index to query against, so you only consume the capacity of the items actually returned by the query. GSIs have their own RCU and WCU allocations. Note that GSIs are always eventually consistent: replication between the base table and a GSI is asynchronous.
    • You can choose which attributes are projected into an index: KEYS_ONLY, INCLUDE or ALL
    • Use GSIs by default; use an LSI only when strong consistency is required (a GSI creation sketch follows this list).
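  • A boto3 sketch of creating a table with a GSI under provisioned capacity; the table name, key names and index name are illustrative assumptions.

    import boto3

    client = boto3.client('dynamodb')
    client.create_table(
        TableName='Orders',
        AttributeDefinitions=[
            {'AttributeName': 'customer_id', 'AttributeType': 'S'},
            {'AttributeName': 'order_id', 'AttributeType': 'S'},
            {'AttributeName': 'order_status', 'AttributeType': 'S'},
        ],
        KeySchema=[
            {'AttributeName': 'customer_id', 'KeyType': 'HASH'},
            {'AttributeName': 'order_id', 'KeyType': 'RANGE'},
        ],
        ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5},
        GlobalSecondaryIndexes=[{
            'IndexName': 'order_status-index',
            # A GSI can use completely different partition and sort keys.
            'KeySchema': [{'AttributeName': 'order_status', 'KeyType': 'HASH'},
                          {'AttributeName': 'order_id', 'KeyType': 'RANGE'}],
            'Projection': {'ProjectionType': 'KEYS_ONLY'},   # KEYS_ONLY / INCLUDE / ALL
            # GSIs carry their own capacity allocation.
            'ProvisionedThroughput': {'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5},
        }])
    # Queries against the index pass IndexName='order_status-index' and are
    # always eventually consistent.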
  • DynamoDB can do a full backup and restore using S3, including the capacity units, all of the configuration and even the indexes. You can also enable point-in-time recovery.
  • DynamoDB Transactions: a way to get atomicity in DynamoDB, so that a group of operations succeeds or fails together.
  • Reserved Capacity: you can purchase capacity up front in order to reduce cost; it works in a similar fashion to reserved instances.
  • Whatever capacity you allocate when configuring a DynamoDB table is evenly distributed across all of the partitions supporting it. Therefore, a given set of items can only ever achieve the maximum of their partition, which is itself only a fraction of the table capacity. For example, a table with 3000 WCU and 9000 RCU spread across 3 partitions gives each partition a maximum of 1000 WCU and 3000 RCU. That’s provisioned capacity.
  • More recently, DynamoDB introduced Auto Scaling, similar to EC2 Auto Scaling: you define minimum and maximum provisioned capacity as well as a target utilisation percentage.
  • On-Demand Billing: takes away all of the capacity management and offloads it to the service; you pay per use, for storage and for reads/writes.
  • Adaptive Capacity: if, at the table level, you are consuming less capacity than provisioned, individual partitions can (as a soft limit) borrow capacity from other partitions with low load. This reduces the hot-partition/hot-key issue.
  • Time to Live (TTL): allows you to expire items in a table automatically. You specify which attribute holds the TTL (expiry) timestamp.
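  • A small boto3 sketch of enabling TTL; the table and attribute names are assumptions, and the TTL attribute must hold a Unix epoch timestamp in seconds.

    import time
    import boto3

    boto3.client('dynamodb').update_time_to_live(
        TableName='Sessions',
        TimeToLiveSpecification={'Enabled': True, 'AttributeName': 'expires_at'})

    boto3.resource('dynamodb').Table('Sessions').put_item(Item={
        'session_id': 'abc123',
        'expires_at': int(time.time()) + 3600,   # expire roughly an hour from now
    })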
  • Global Tables: a set of replicated tables across multiple regions, using an asynchronous multi-master replication architecture. Streams have to be enabled for global tables to work. Once the table contains data you cannot add an additional region, only remove one.
  • Dynamo DB Accelerator (DAX)
  • It’s an in-memory cache, similar to ElastiCache but built specifically for DynamoDB.
  • With a traditional cache, the application checks the cache for data and, if it isn’t cached, a CACHE MISS occurs: the data is loaded from the database with a separate operation and the cache is then updated with the retrieved data. Subsequent queries load the data from the cache as a CACHE HIT (this pattern is sketched after the next bullet).
  • DAX removes this admin overhead: the app makes a single call and DAX handles the caching. The benefit is that there is only one set of API calls via one SDK.
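  • For comparison, a minimal sketch of the manual cache-aside pattern that DAX makes unnecessary. The table name and the in-process dict standing in for a cache cluster are assumptions.

    import boto3

    table = boto3.resource('dynamodb').Table('Products')
    _cache = {}   # stand-in for ElastiCache / any external cache

    def get_product(product_id):
        if product_id in _cache:                                  # CACHE HIT
            return _cache[product_id]
        resp = table.get_item(Key={'product_id': product_id})     # CACHE MISS
        item = resp.get('Item')
        if item is not None:
            _cache[product_id] = item                             # update the cache afterwards
        return item

    # With DAX the application just points the DAX client at the cluster
    # endpoint and keeps making ordinary GetItem/Query calls - one API,
    # no cache-management code like the above.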
  • DAX lives within a VPC and should be deployed across multiple AZs. It runs as a cluster with a primary node and read replicas; an application on, for example, an EC2 instance needs the DAX SDK to use it.
    • Item cache: holds the results of GetItem and BatchGetItem calls. It stores single items, so you need to specify the key.
    • Query cache: stores collections of items returned by Query/Scan calls, along with the parameters that were used.
  • DAX is accessed via an endpoint. Cache hits are returned in microseconds. Write-through is supported: data is written to DDB and then to DAX.
  • You can scale up and out!
  • DAX is not a public service, it needs to be deployed within a VPC.
  • DAX is not ideal if your app cannot tolerate eventual-consistency.
  • DAX is not ideal if your app is write-heavy.
  • DAX is ideal for read-heavy workloads that need low response times.

Dynamo DB Core components

  • Table: tables have items and items have attributes. The mandatory attribute is the partition key; the sort key is optional, and the combination of the two uniquely identifies an item. A partition is almost like a bucket. In the customers-and-orders case, the partition key could be the customer ID and the sort key the order ID, so all of the orders for a given customer fall under the same partition key, line up together on disk and can be read sequentially. The sort key gives you the ability to execute large, complex queries against those attributes.

  • Dynamo DB Operations
  • Query: can return zero, one or multiple items, but you need to specify a single value for the partition key. Even if you retrieve an item of 2.5KB, it still consumes 1 RCU (rounded up), so minimising item size is a best practice.
  • Scan: if you want a more flexible search you use a scan, but it is the least performant option: it moves through the table item by item and you consume capacity for all of the data read, which is expensive from a capacity-unit perspective.

Scalability of Dynamo DB

  • Partition keys are hashed to create a randomized key space. DynamoDB lines the items up across that key space and, as you increase the capacity or the size of the table, it chunks the key space and splits it across multiple nodes; this is a fundamental principle of NoSQL.
  • There are 2 dimensions
    • Throughput: how much do you need? Read/write capacity is provisioned at the table level and split across partitions, with write and read capacity set independently. A write unit is measured as 1KB per second and a read unit as 4KB per second. Read Capacity Units are measured for strongly consistent reads; eventually consistent reads cost half as much.
    • Size: Max item size is 400KB
  • The number of partitions is calculated from capacity, (Total RCU / 3000) + (Total WCU / 1000), and from size, Total size / 10GB. DynamoDB allocates CEILING(MAX(Capacity, Size)) partitions and evenly distributes the RCUs and WCUs across them.
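  • The same formula as a small helper, with illustrative numbers:

    import math

    def partition_estimate(total_rcu, total_wcu, total_size_gb):
        by_capacity = total_rcu / 3000 + total_wcu / 1000
        by_size = total_size_gb / 10
        return math.ceil(max(by_capacity, by_size))

    # e.g. 6000 RCU + 1000 WCU and 8GB of data -> ceil(max(3.0, 0.8)) = 3 partitions
    print(partition_estimate(total_rcu=6000, total_wcu=1000, total_size_gb=8))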
  • Example scenario: you need to store 10 items per second, with an average item size of 2.5KB.
    • That is 10 writes/second; the per-second part is important.
    • WCU per item = ROUNDUP(item size / 1KB) = ROUNDUP(2.5 / 1) = 3 WCU.
    • Multiply by the number of items per second: 3 * 10 = 30 WCU required.
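  • The same worked example in a couple of lines:

    import math

    item_size_kb = 2.5
    writes_per_second = 10

    wcu_per_item = math.ceil(item_size_kb / 1)      # 1KB per WCU -> 3 WCU
    total_wcu = wcu_per_item * writes_per_second    # -> 30 WCU
    print(total_wcu)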

DynamoDB Consistency Model

  • Data is written to storage nodes, each of which resides within a single AZ (the nodes are spread across different AZs). One of them is the leader storage node, and all three hold the same items. In the case of an update, say one attribute of an item being deleted, the leader node always receives the write, so the leader node is “consistent” at that point in time.
  • Then Leader starts replicating data to other nodes, finishing within a few milliseconds.
    • Eventual Consistency: for this type of read, DDB directs the request to any one of the 3 storage nodes. You might get stale data if you are directed to a node that has not yet been fully synchronised. This costs 50% of a strongly consistent read.
    • Strong Consistency: the read connects to the leader node to get the most up-to-date copy of the data.
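  • With boto3 the choice is just a flag; the table and key names below are assumptions.

    import boto3

    table = boto3.resource('dynamodb').Table('Orders')
    key = {'customer_id': '42', 'order_id': '1001'}

    # Eventually consistent (the default): may be served by any storage node,
    # at half the RCU cost.
    table.get_item(Key=key)

    # Strongly consistent: served by the leader node, full RCU cost.
    table.get_item(Key=key, ConsistentRead=True)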

Throttling

  • Throttling happens if sustained throughput goes beyond the provisioned throughput per partition. A burst bucket may cover you for a short period, but if a hot key on a particular partition keeps going for too long, you exhaust it and get throttled.
  • Causes
    • Hot keys/partitions
    • Very large items
    • Time series data: the table, and therefore its partitions, keeps getting bigger, so per-partition throughput drops

How to get most out of DynamoDB

  • Distribute data and access evenly across partitions and over time within your DB
  • Data Modelling in NoSQL: It’s all about aggregations. Therefore, it’s important to model in that way
  • Use cases: Social Network, Document Management, Process Control, IT monitoring and Data trees
  • Hierarchical data - how would you represent it? Create a series of items and store these in partitions, or store the hierarchy as JSON within an item; the JSON approach is useful for saving WCU cost whenever you insert new items

Design Patterns and Best Practices

1) DynamoDB Streams and Lambda

  • Streams capture changes made to the items of a particular table. There are different ways to view the changes;
    • Keys only - only the key attributes of the modified item, just the partition key or partition/sort key.
    • New image - the entire item, as it appears after it was modified.
    • Old image - the entire item, as it appeared before it was modified.
    • New and old image both.
  • Lambda as the better “stored procedure” (a stored procedure is a group of one or more database statements stored in the database’s data dictionary and called from either a remote program, another stored procedure, or the command line).
  • Implementing triggers: you create a reaction to changes in DynamoDB. E.g. every time a stock level changes, a trigger (Lambda function) reacts to the newly added or changed items.
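  • A minimal Lambda handler sketch for the stock-level trigger above, assuming the stream view type includes old and new images and that the attribute is called 'stock' (both assumptions).

    def handler(event, context):
        for record in event['Records']:
            if record['eventName'] != 'MODIFY':
                continue
            old = record['dynamodb'].get('OldImage', {})
            new = record['dynamodb'].get('NewImage', {})
            # Stream images use the low-level attribute-value format, e.g. {'N': '5'}
            old_stock = int(old.get('stock', {}).get('N', 0))
            new_stock = int(new.get('stock', {}).get('N', 0))
            if new_stock != old_stock:
                print(f"stock changed from {old_stock} to {new_stock}")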

2) Real-time voting

  • There is a long (or short) list of candidates, and one or two options get most of the votes. If I aggregate votes in a table by candidate ID, two keys become very busy - that is the definition of a hot key. The way to deal with this is to take those candidate IDs and append an arbitrary random value within a known range, e.g. CandidateA_4, CandidateA_5, so the votes are sprayed across partitions. Later on, a scatter-gather query merges the results. Shard write-heavy partition keys!!! You trade cost for write scalability, and you have to consider throughput per partition key. The traditional model is also unsuitable where a certain part of the country has far more voters, because you would get hot partitioning; adaptive capacity is more useful in that case!
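  • A sketch of the write-sharding plus scatter-gather idea; the table name, key name and shard count are assumptions.

    import random
    import boto3

    SHARDS = 10
    table = boto3.resource('dynamodb').Table('Votes')

    def record_vote(candidate_id):
        # Spread a hot key across SHARDS partition keys, e.g. CandidateA_0..CandidateA_9
        shard_key = f"{candidate_id}_{random.randint(0, SHARDS - 1)}"
        table.update_item(
            Key={'candidate': shard_key},
            UpdateExpression='ADD votes :one',
            ExpressionAttributeValues={':one': 1})

    def total_votes(candidate_id):
        # Scatter-gather: read every shard and merge the counts.
        total = 0
        for shard in range(SHARDS):
            resp = table.get_item(Key={'candidate': f"{candidate_id}_{shard}"})
            total += int(resp.get('Item', {}).get('votes', 0))
        return total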

3) Event Logging

  • Time series data: There are two types;
    • Static Time Series Data: think of an event monitoring system or ITSM-type app with event streams coming in from different devices - operational analytics data over a period of time. After that period the data is much less relevant and is queried far less often. You create a hot table with high write and high read capacity that takes all the ingestion, then later turn it into a warm or cold table with zero write capacity. Data lifecycle.
    • Dynamic Time Series Data: could be session data or user data, where every time the user touches the data you need to update its timestamp, so hot and cold data is spread all over the table. You build a TTL index: whenever you insert an event or object, every item gets an attribute that is a GSI key with a random value in the range 0 to n - you are write-sharding the GSI. You then create a GSI on that shard-key attribute, arranged on the timestamp of your items. You have effectively rotated the entire table into a single index on one attribute, so you can execute a range query via scatter-gather against that TTL index and get a very selective result set that won’t burn a lot of capacity, because you don’t need to scan every item. Expired items can then be retrieved quickly and cleaned up by an AWS Lambda running as a scheduled process.
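  • A sketch of that write-sharded GSI pattern, assuming a table named 'Events' with a GSI called 'shard-ts-index' on (gsi_shard, ts); all names are illustrative.

    import random
    import time
    import boto3
    from boto3.dynamodb.conditions import Key

    N = 8                                            # number of GSI shards
    table = boto3.resource('dynamodb').Table('Events')

    def put_event(event_id, payload):
        table.put_item(Item={
            'event_id': event_id,
            'payload': payload,
            'gsi_shard': random.randint(0, N - 1),   # write-shard the GSI
            'ts': int(time.time()),
        })

    def events_between(start_ts, end_ts):
        # Scatter-gather: range-query each shard of the index and merge.
        items = []
        for shard in range(N):
            resp = table.query(
                IndexName='shard-ts-index',
                KeyConditionExpression=Key('gsi_shard').eq(shard) &
                                       Key('ts').between(start_ts, end_ts))
            items.extend(resp['Items'])
        return items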

4) Product Catalog

  • In the retail space, or in sales activities, you have people repeatedly looking at a select set of items from your catalog. You would implement a cache, put those high-velocity items in it and read from the cache first.

5) Multi-version Concurrency (Transactional NoSQL)

  • OLTP apps always use data in a hierarchical way and work with top-level entities (so-called entity-driven workflows): a customer, a product. These are highly relational. Use item partitions to manage transactional workflows

  • Manage versioning across items with metadata
  • Tag attributes to maintain multiple versions
  • Code your app to recognize when updates are in progress
  • Implement app layer error handling and recovery logic

6) Messaging APP

  • You have a messaging app; think email, or a video game with messaging infrastructure, with a messages table. You want views of that table: inbox and outbox. You can create a “messages” table hashed (partitioned) on recipient and sorted on date. If the average message size is 256KB and you select 50 items, even with eventually consistent reads (half the cost) you still burn 1600 RCU (50 * 256KB * (1 RCU / 4KB) * (1/2)). For the inbox view you are not interested in the message bodies, only the metadata (subject etc.), so borrow the old relational strategy of vertically partitioning the table: create another GSI that carries only the metadata
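  • The RCU arithmetic from that example:

    import math

    items = 50
    item_size_kb = 256
    rcu_per_item = math.ceil(item_size_kb / 4)   # 64 RCU per item, strongly consistent
    total_rcu = items * rcu_per_item * 0.5       # eventually consistent reads cost half
    print(total_rcu)                             # -> 1600.0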

7) Multiplayer Online Gaming

  • A table holds user invites to games: a bunch of games done, in progress and pending. What you want is to retrieve a user’s pending invites within a particular timeframe. Apply range queries against sort key attributes!!! You can additionally apply a filter condition that only knocks items out after they have been read
  • Sparse indexes should be used in DynamoDB to minimize database size (and cost); look up local secondary indexes in the DynamoDB guide. RDS is not suitable for this: it would be almost impossible to archive and the database would just keep growing, whereas with archiving DynamoDB is a much more cost-effective solution. DynamoDB allows for higher write throughput at lower cost than traditional relational databases.
  • Suppose you have a DDB table with e-commerce entities such as users, orders, items etc., and you want to filter only ‘open orders’ while you already have different composite keys (partition and sort keys) for other purposes. How would you handle that? Add an extra attribute, e.g. ‘placedID’, to any order in the ‘PLACED’ state, just to indicate that it exists, then create a secondary index on placedID. The resulting Global Secondary Index contains only the orders with ‘PLACED’ status (a sparse index), so a Scan/Query against it touches only those items.
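  • A sketch of the sparse-index trick; the table, attribute and index names are assumptions, and the 'placedID-index' GSI is assumed to already exist on the placedID attribute.

    import boto3

    table = boto3.resource('dynamodb').Table('Ecommerce')

    # Only open orders carry the 'placedID' attribute, so only they appear in
    # the sparse GSI built on it.
    table.put_item(Item={'pk': 'ORDER#1001', 'sk': 'META',
                         'order_status': 'PLACED', 'placedID': 'ORDER#1001'})

    # When the order ships, remove the attribute and the item drops out of the GSI.
    table.update_item(Key={'pk': 'ORDER#1001', 'sk': 'META'},
                      UpdateExpression='REMOVE placedID SET order_status = :s',
                      ExpressionAttributeValues={':s': 'SHIPPED'})

    # Scanning the (small) sparse index returns only the open orders.
    open_orders = table.scan(IndexName='placedID-index')['Items']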

ElastiCache Redis vs Memcached

  • complex data objects vs simple key value storage
  • persistent vs non persistent, pure caching
  • automatic failover with Multi-AZ vs Multi-AZ not supported
  • scaling using Read Replicas vs using multiple nodes (not good for backups/restore)
  • backup & restore supported vs not supported
  • Memcached supports multi-threaded operations
  • ElastiCache supports backup and restore

DynamoDB and SQS - R/W throughput optimisation

  • How can I reduce the read/write throughput, especially in spiky workloads?
  • Use SQS as a fronting agent in front of DynamoDB. If you are not worried about getting the lowest possible latency into DynamoDB and are OK with slightly asynchronous writes, then instead of writing directly to DynamoDB you put an SQS queue in front, with a draining application that takes messages from SQS and writes them into DynamoDB. With this you keep a consistently smaller pipe going into DynamoDB, get some buffering within SQS, and don’t have to scale DynamoDB for those peak moments. Huge cost savings if your app can handle that latency…
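  • A sketch of the draining application; the queue URL and table name are placeholders.

    import json
    import boto3

    sqs = boto3.client('sqs')
    table = boto3.resource('dynamodb').Table('Orders')
    QUEUE_URL = 'https://sqs.REGION.amazonaws.com/ACCOUNT_ID/writes-queue'   # placeholder

    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                   MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)          # long polling
        for msg in resp.get('Messages', []):
            table.put_item(Item=json.loads(msg['Body']))        # steady, small write rate
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=msg['ReceiptHandle'])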

DynamoDB Global Tables

  • The idea of a low-latency global table is that a single write is automatically handled across multiple regions, giving redundancy for a globally dispersed user base.

Amazon Neptune

  • It’s a graph database model, relatively new and a fairly niche use case. Designed for dynamic relationships, such as social media app users and how they are related to each other; also knowledge graphs, fraud detection, recommendation engines etc.
  • Great for fraud prevention as well, building patterns out of the data and looking for anomalies.
  • Recommendation Engines as well to figure out relationships of people.
  • Network and IT operations to define dependencies.
  • Biology and other life sciences as well to identify relationships between drug types etc.

Amazon Quantum Ledger Database (QLDB)

  • This is a fully immutable database using cryptographically verifiable ledger technology to track every change to the data in a way which cannot be manipulated. For example, when a change is made to a bank account, you want to know when and how that data was adjusted.
  • QLDB uses a document storage model, allowing data with complex, nested structures to be stored and queried.
  • QLDB is an ACID based DB with strong transactional consistency. It’s a fully managed DBaaS service without any provisioning.
  • Changes in QLDB are cryptographically verifiable using a “hash-chained” journal. When data is initially added, a hash is created of that data. When a new version is added, a new hash is created of the current data plus the old hash - this forms a chain. It makes tampering or changes immediately detectable, and the more data is added, the harder it becomes to ‘fake’ the chain.
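  • A toy illustration of the hash-chaining idea (not QLDB’s actual internals):

    import hashlib

    def chain_hash(previous_hash, new_document):
        # Each entry's hash covers the new data plus the previous hash.
        return hashlib.sha256(previous_hash + new_document.encode()).digest()

    h0 = hashlib.sha256(b'{"account": "123", "balance": 100}').digest()
    h1 = chain_hash(h0, '{"account": "123", "balance": 80}')
    h2 = chain_hash(h1, '{"account": "123", "balance": 95}')
    # Altering the first revision changes h0, which changes h1 and h2 in turn,
    # so tampering anywhere in the history is immediately detectable.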
  • Industries
    • Banking Transactions: you need to know any irregularities
    • Manufacturing: authenticity of the data through the supply chain
    • Healthcare: X-Ray, patients etc. next iterations

Amazon Document DB with MongoDB Compatibility

  • See the AWS website for more detail

STATE SHARING
Problem

  • State is stored on my server, so scaling horizontally does not work that well.

Solution

  • In order to scale horizontally and not have a user locked to a single server, I need to move state off of my server into a KVS. Moving session data into DynamoDB or ElastiCache allows my application to be stateless. This lets you use a scale-out pattern without having to worry about losing state information.
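  • A sketch of externalising session state into DynamoDB so any web server can pick up the session; the table and attribute names are assumptions.

    import time
    import boto3

    sessions = boto3.resource('dynamodb').Table('Sessions')

    def save_session(session_id, data):
        sessions.put_item(Item={'session_id': session_id,
                                'data': data,
                                'expires_at': int(time.time()) + 1800})  # pairs with a TTL attribute

    def load_session(session_id):
        # Any instance behind the load balancer can fetch the same session.
        return sessions.get_item(Key={'session_id': session_id}).get('Item')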