ElastiCache : Memcached or Redis

TTL-based eviction is supported
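As a rough illustration of TTL-based eviction (an in-process toy, not the ElastiCache API): each entry carries an expiry time and is evicted lazily when read after it expires, similar in spirit to Redis key expiration.

```python
import time

class TTLCache:
    """Toy cache that evicts entries lazily once their TTL has passed."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # evict on access, like lazy expiration in Redis
            return None
        return value

cache = TTLCache()
cache.set("session:1", "alice", ttl_seconds=0.05)
assert cache.get("session:1") == "alice"
time.sleep(0.06)
assert cache.get("session:1") is None  # expired and evicted
```

With a real cluster you would instead set TTLs on the server (e.g. Redis `SETEX`), but the eviction semantics the note refers to are the same.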

 

 

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.Components.html

 

ElastiCache for Redis components and features - Amazon ElastiCache for Redis

Access control based on IP ranges is currently not enabled for clusters. All clients to a cluster must be within the Amazon EC2 network, and authorized by using security groups as described previously.


 

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Strategies.html

 

Caching strategies - Amazon ElastiCache for Redis


https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html

 

Caching strategies - Amazon ElastiCache


https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Strategies.html#Strategies.WithTTL

 


https://aws.amazon.com/elasticache/redis-vs-memcached/

 

Redis vs. Memcached | AWS


https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/best-practices-online-resharding.html

 

Best practices: Online cluster resizing - Amazon ElastiCache for Redis


https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/BestPractices.BGSAVE.html

 

Ensuring that you have enough memory to create a Redis snapshot - Amazon ElastiCache for Redis


https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Scaling.html

 

Scaling ElastiCache for Redis clusters - Amazon ElastiCache for Redis


https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/encryption.html

 

Data security in Amazon ElastiCache - Amazon ElastiCache for Redis




You only create tables: DynamoDB is managed at the service level and is global

Partition key: must be unique and not null

ACID transactions are supported

 

Partitions

Primary key: can be a composite key: partition key + sort key

Primary keys should be distributed as evenly as possible

Partitions are assigned by hashing the key
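DynamoDB's internal hash function is not public, but the even-distribution point can be shown with a hypothetical hash-to-partition mapping: any stable hash spreads high-cardinality keys roughly uniformly over partitions.

```python
import hashlib
from collections import Counter

def partition_for(key: str, num_partitions: int) -> int:
    # Stable hash of the partition key, reduced to a partition index.
    # (md5 stands in for DynamoDB's internal, undocumented hash.)
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

# 10,000 distinct keys spread over 4 partitions land close to 2,500 each.
counts = Counter(partition_for(f"user#{i}", 4) for i in range(10_000))
print(dict(counts))
```

A low-cardinality or skewed partition key (e.g. a status flag) would concentrate items on a few partitions, which is exactly what the "distribute evenly" advice guards against.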

 

Capacity:

RCU (Read Capacity Unit): one read per second of up to 4 KB

WCU (Write Capacity Unit): one write per second of up to 1 KB

 

Provisioned mode: RCU/WCU are allocated in advance (e.g. 1000 RCU and 1000 WCU); once they are used up, requests are throttled with an error

On-demand mode: you pay for however many RCU/WCU you actually consume

Adaptive capacity: RCU/WCU apply per partition, but a hot partition can borrow unused capacity from the other partitions

Example: with 400 WCU over 4 partitions, each partition maxes out at 100; if partitions 1-3 use 50 each and partition 4 exceeds 100, this used to throttle,

but now partition 4 can consume the unused capacity of partitions 1-3
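The RCU/WCU arithmetic above can be sketched in Python, assuming the standard rounding rules (reads round up to 4 KB units, writes to 1 KB units, and eventually consistent reads cost half):

```python
import math

def rcu_needed(item_size_kb: float, reads_per_sec: int, strongly_consistent: bool = True) -> int:
    units = math.ceil(item_size_kb / 4)          # 1 RCU = one 4 KB read per second
    total = units * reads_per_sec
    return total if strongly_consistent else math.ceil(total / 2)

def wcu_needed(item_size_kb: float, writes_per_sec: int) -> int:
    return math.ceil(item_size_kb / 1) * writes_per_sec  # 1 WCU = one 1 KB write per second

print(rcu_needed(6, 100))   # 6 KB rounds up to 2 read units -> 200 RCU
print(wcu_needed(1.5, 10))  # 1.5 KB rounds up to 2 write units -> 20 WCU
```

In provisioned mode these numbers are what you reserve up front; in on-demand mode they are effectively what you are billed for as traffic arrives.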

Amazon DynamoDB Accelerator

 

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.html

 

Getting Started with DynamoDB and AWS SDKs - Amazon DynamoDB


https://docs.aws.amazon.com/cli/latest/reference/dynamodb/scan.html

 

scan — AWS CLI 1.22.95 Command Reference


 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.html

 

Using Expressions in DynamoDB - Amazon DynamoDB


https://aws.amazon.com/ko/dynamodb/dax/

 

Amazon DynamoDB Accelerator(DAX)


https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.html

 

In-Memory Acceleration with DynamoDB Accelerator (DAX) - Amazon DynamoDB


https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/BackupRestore.html

 

Using On-Demand Backup and Restore for DynamoDB - Amazon DynamoDB


 

aws secretsmanager get-secret-value --secret-id arn:aws:secretsmanager:us-west-2:642614214013:secret:mydbsecret-123-39JHNq --version-stage AWSCURRENT



sh-4.2$ aws secretsmanager get-secret-value --secret-id arn:aws:secretsmanager:us-west-2:642614214013:secret:mydbsecret-123-39JHNq --version-stage AWSCURRENT
{
    "Name": "mydbsecret-123",
    "VersionId": "ee1750fe-bc06-4b08-a507-429e370aba0b",
    "SecretString": "{\"username\":\"admin\",\"password\":\"Pa33w0rd!\",\"engine\":\"mysql\",\"host\":\"rdslabdb.cin1rcsx20ld.us-west-2.rds.amazonaws.com\",\"port\":3306,\"dbname\":\"MyRDSLab\",\"dbInstanceIdentifier\":\"rdslabdb\"}",
    "VersionStages": [
        "AWSCURRENT"
    ],
    "CreatedDate": 1649816414.198,
    "ARN": "arn:aws:secretsmanager:us-west-2:642614214013:secret:mydbsecret-123-39JHNq"
}
sh-4.2$


secret=$(aws secretsmanager get-secret-value --secret-id arn:aws:secretsmanager:us-west-2:642614214013:secret:mydbsecret-123-39JHNq | jq .SecretString | jq fromjson)
user=$(echo $secret | jq -r .username)
password=$(echo $secret | jq -r .password)
endpoint=$(echo $secret | jq -r .host)
port=$(echo $secret | jq -r .port)



mysql -h $endpoint -u $user -P $port -p$password rdslabdb
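The same extraction that the jq pipeline performs can be sketched in Python. The SecretString literal below is copied from the get-secret-value output above; in a real script you would fetch it with boto3's `get_secret_value` instead of hard-coding it.

```python
import json

# Sample payload as returned in the session above; in practice you would get it
# from boto3: client.get_secret_value(SecretId=...)["SecretString"].
secret_string = (
    '{"username":"admin","password":"Pa33w0rd!","engine":"mysql",'
    '"host":"rdslabdb.cin1rcsx20ld.us-west-2.rds.amazonaws.com",'
    '"port":3306,"dbname":"MyRDSLab","dbInstanceIdentifier":"rdslabdb"}'
)
secret = json.loads(secret_string)
user, password = secret["username"], secret["password"]
endpoint, port = secret["host"], secret["port"]

# Equivalent of the mysql invocation built from the jq variables:
print(f"mysql -h {endpoint} -u {user} -P {port} -p... {secret['dbname']}")
```

This mirrors `jq .SecretString | jq fromjson` followed by the `jq -r .username` / `.password` / `.host` / `.port` extractions.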




aws secretsmanager list-secret-version-ids --secret-id mydbsecret-456



{
    "Name": "mydbsecret-456",
    "ARN": "arn:aws:secretsmanager:us-west-2:642614214013:secret:mydbsecret-456-BXTNeQ",
    "Versions": [
        {
            "VersionId": "bb8a8e83-e506-4123-8505-3172412da20c",
            "VersionStages": [
                "AWSCURRENT"
            ],
            "LastAccessedDate": 1649808000.0,
            "CreatedDate": 1649817582.691
        }
    ]
}


arn:aws:secretsmanager:us-west-2:642614214013:secret:mydbsecret-456-BXTNeQ



secret=$(aws secretsmanager get-secret-value --secret-id arn:aws:secretsmanager:us-west-2:642614214013:secret:mydbsecret-456-BXTNeQ | jq .SecretString | jq fromjson)
user=$(echo $secret | jq -r .username)
password=$(echo $secret | jq -r .password)
endpoint=$(echo $secret | jq -r .host)
port=$(echo $secret | jq -r .port)


mysql -h $endpoint --ssl-ca=rds-combined-ca-bundle.pem --ssl-verify-server-cert -u $user -P $port -p$password mydb



MySQL [mydb]> STATUS
--------------
mysql  Ver 15.1 Distrib 5.5.68-MariaDB, for Linux (x86_64) using readline 5.1

Connection id:          35
Current database:       mydb
Current user:           admin@10.0.1.218
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server:                 MySQL
Server version:         5.7.22-log Source distribution
Protocol version:       10
Connection:             mydb.cin1rcsx20ld.us-west-2.rds.amazonaws.com via TCP/IP
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
TCP port:               3306
Uptime:                 2 hours 1 min 13 sec

Threads: 2  Questions: 16909  Slow queries: 0  Opens: 371  Flush tables: 1  Open tables: 57  Queries per second avg: 2.324





mysql -u dbadmin -p -h qdd183c5b4qq1v.cyl8pclvpbyz.us-west-2.rds.amazonaws.com


 Innodb_buffer_pool_read_requests      | 421321604  |
 Innodb_buffer_pool_read_requests      | 452173597  |
 Innodb_buffer_pool_read_requests      | 483025722  |
 Innodb_buffer_pool_read_requests      | 483029657  |
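Innodb_buffer_pool_read_requests is a cumulative counter, so the interesting number is the difference between consecutive SHOW STATUS samples; using the readings above:

```python
# Cumulative Innodb_buffer_pool_read_requests values sampled above.
samples = [421321604, 452173597, 483025722, 483029657]

# Logical reads that happened between each pair of samples.
deltas = [b - a for a, b in zip(samples, samples[1:])]
print(deltas)
```

The first two intervals show heavy read activity, the last one shows the load had dropped off.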


-------------



mysql -u dbadmin -p -h qd11he8s6l6lx71.cyl8pclvpbyz.us-west-2.rds.amazonaws.com





mongoimport --ssl --host mydocdb.cluster-cfsh5bndxhms.us-west-2.docdb.amazonaws.com:27017 \
 --sslCAFile rds-combined-ca-bundle.pem \
 --username docdbadmin \
 --password Pa33w0rd! \
 --collection cast_1990 --db cast \
 --file /tmp/cast_1990.json --jsonArray

mongo --ssl --host mydocdb.cluster-cfsh5bndxhms.us-west-2.docdb.amazonaws.com:27017 \
--sslCAFile rds-combined-ca-bundle.pem \
--username docdbadmin \
--password Pa33w0rd!


mydocdb


dbinstanceb-ozxtfkgbrskk : primary
dbinstancea-ml2krds2zclb : replica


aws docdb describe-db-clusters \
    --db-cluster-identifier mydocdb  \
    --query 'DBClusters[*].[DBClusterIdentifier,Status]'


aws docdb describe-db-instances \
--db-instance-identifier dbinstanceb-ozxtfkgbrskk  \
--query 'DBInstances[*].[DBInstanceIdentifier,DBInstanceStatus]'
aws dynamodb scan --table-name Cast \
  --filter-expression "genres = :a" \
  --projection-expression "#YR, #TT" \
  --expression-attribute-names file://expression-attribute-names.json \
  --expression-attribute-values file://expression-attribute-values.json

  aws dynamodb get-item --table-name Cast \
    --key '{"year":{"N": "1999"},"title":{"S":"18 Shades of Dust"}}' \
    --expression-attribute-names '{"#c": "cast"}' \
    --projection-expression "titleId, title, runtimeMinutes, genres, #c"

MySQL      DocumentDB
Table      Collection
Row        Document
Column     Field
Joins      Embedding, Linking
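A tiny sketch of the Joins vs. Embedding row: what the relational model expresses with two tables and a join, the document model stores embedded in one document. Plain Python dicts with made-up movie data stand in for the two systems here.

```python
# Relational shape: two "tables" joined on movie_id.
movies = [{"movie_id": 1, "title": "18 Shades of Dust"}]
actors = [{"movie_id": 1, "name": "Actor A"}]
joined = [
    {**m, "cast": [a["name"] for a in actors if a["movie_id"] == m["movie_id"]]}
    for m in movies
]

# Document shape: the cast is embedded directly in the movie document,
# so no join is needed at read time.
document = {"title": "18 Shades of Dust", "cast": ["Actor A"]}

assert joined[0]["cast"] == document["cast"]
```

Linking (storing a reference to another document) is the other option in the table, trading read-time lookups for less duplication.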

Read Replicas: up to 15
Storage: up to 128 TiB

 
                     Amazon Aurora                    Amazon RDS
Replicas             up to 15                         up to 5
Replication type     async (milliseconds)             async (seconds)
Replication layer    storage layer                    instance layer
Failover target      replica                          standby
Storage              up to 128 TiB                    up to 64 TiB
Automatic failover   yes, fails over to a replica     yes, fails over to the standby
Storage engines      MySQL InnoDB, PostgreSQL         MySQL InnoDB, MyISAM, PostgreSQL

sh-4.2$ cat failover_test.py
#!/usr/bin/env python3

import argparse
import logging
import pprint
import sys
import time
import traceback

from db_test_meter.database import Database
from db_test_meter.test_run import TestRun
from db_test_meter.util import init_logger, collect_user_input

parser = argparse.ArgumentParser('This will gather metrics of a failover event')
parser.add_argument('--test_run_id', metavar='<test run id>', type=str, nargs='?', required=True,
                    help='a unique identifier for this test run')
parser.add_argument('--loop_time', metavar='<seconds>', type=float, nargs='?', default='.5',
                    help='sleep is used to ensure this minimum loop time in sec. Can be decimal (defaults to .5)')
parser.add_argument('--debug', action='store_true')
args = parser.parse_args()
test_run_id = args.test_run_id
loop_time = args.loop_time
if loop_time <= 0:
    print('Loop time must be greater than 0, exiting...')
    exit(1)
init_logger(debug=args.debug)
log = logging.getLogger()

db_connection_metadata = collect_user_input()
db = Database(db_connection_metadata)
test_runner = TestRun(db)

if not test_runner.test_db_connection():
    log.fatal('Initial db connection failed.  Check your connection setup and try again. Exiting...')
    exit(1)

pre_failure_db_node_hostname = test_runner.get_db_node_hostname()
print(f'Test starting, initial Db node hostname: {pre_failure_db_node_hostname}')
post_failure_db_node_hostname = None

try:
    while True:
        loop_start_time = time.time()
        test_runner.ensure_minumum_loop_time(loop_time, loop_start_time, test_runner.prev_loop_end_time)
        if test_runner.db_node_heartbeat(test_run_id):
            if test_runner.recovery_detected():
                test_runner.failure_condition_end_time = time.time()
                post_failure_db_node_hostname = test_runner.get_db_node_hostname()
                test_runner.prev_loop_end_time = time.time()
                break
        test_runner.prev_loop_end_time = time.time()
except Exception as e:
    print(f'There was an unexpected exception: {e}')
    print("-" * 60)
    traceback.print_exc(file=sys.stdout)
    print("-" * 60)
    exit(1)
finally:
    test_runner.shutdown()


pp = pprint.PrettyPrinter(indent=2)
print('\n========================================')
print(f'Total Db connection attempts: {test_runner.success_connect_count + test_runner.failed_connect_count}')
print(f'Successful Db connections: {test_runner.success_connect_count}')
print(f'Failed Db connections: {test_runner.failed_connect_count}')
print(f'failure_start_time: {time.ctime(test_runner.failure_condition_start_time)}')
print(f'failure_end_time: {time.ctime(test_runner.failure_condition_end_time)}')
duration = int(test_runner.failure_condition_end_time - test_runner.failure_condition_start_time)
print(f'failure condition duration: {duration} seconds')
print(f'Last inserted sync record id on initial primary db node: {test_runner.last_inserted_heartbeat_index}')
print(f'Pre-failure Db node hostname: {pre_failure_db_node_hostname}')
print(f'Post-failure Db node hostname: {post_failure_db_node_hostname}')
print(f'Newest 5 sync records in current primary db node:')
pp.pprint(test_runner.get_last_sync_records(test_run_id, 5))
sh-4.2$
 
sh-4.2$ cat create_failover_sync_db.py
#!/usr/bin/env python3

import argparse
import logging

from db_test_meter.database import Database
from db_test_meter.util import init_logger, collect_user_input, AppConfig


def create_db(db: Database) -> None:
    """
    Utility to create the db and table for the sync check
    :param db:
    :return:
    """
    try:
        log.debug(f'creating database {AppConfig.TEST_DB_NAME}')
        db.run_query(f"DROP DATABASE IF EXISTS {AppConfig.TEST_DB_NAME}")
        db.run_query(f"CREATE DATABASE IF NOT EXISTS {AppConfig.TEST_DB_NAME}")
        log.debug(f'creating table {AppConfig.TEST_DB_TABLE}')
        db.run_query(
            f"CREATE TABLE {AppConfig.TEST_DB_NAME}.{AppConfig.TEST_DB_TABLE} (`test_run_id` varchar(50) NOT NULL, `index_id` int(10) unsigned NOT NULL, `created` int(8) NOT NULL)")
        print(f'Database {AppConfig.TEST_DB_NAME} created')
        print(f'Table {AppConfig.TEST_DB_NAME}.{AppConfig.TEST_DB_TABLE} created')
    except Exception as e:
        print(f'There was an error: {e}')


parser = argparse.ArgumentParser(
    'simple utility to create the db and table used by failover_test.py. Usage: ./create_failover_sync_db.py')
parser.add_argument('--debug', action='store_true')
init_logger(debug=parser.parse_args().debug)
log = logging.getLogger()

print('This will destroy and recreate sync database and tracking table')
if (input("enter y to continue, n to exit [n]: ") or 'n').lower() == 'y':
    db_connection_metadata = collect_user_input()
    db = Database(db_connection_metadata)
    create_db(db)
else:
    print('exiting...')
sh-4.2$

 

 

sh-4.2$ cat database.py
import sys
import pymysql
import logging


class Database:
    """Database connection class."""

    def __init__(self, db_connection_metadata):
        self.host = db_connection_metadata['db_host']
        self.port = int(db_connection_metadata['db_port'])
        self.username = db_connection_metadata['db_user']
        self.password = db_connection_metadata['db_password']
        self.charset = 'utf8mb4'
        self.cursorclass = pymysql.cursors.DictCursor
        self.read_timeout = db_connection_metadata['db_interact_timeout']  # 1 sec
        self.write_timeout = db_connection_metadata['db_interact_timeout']  # 1 sec
        self.connect_timeout = db_connection_metadata['db_interact_timeout']  # 1 sec
        self.ssl_metadata = db_connection_metadata['ssl_metadata']

        self.conn = None

    def open_connection(self):

        if self.conn is None:
            logging.debug('opening db connection')
            self.conn = pymysql.connect(
                host=self.host,
                port=self.port,
                user=self.username,
                password=self.password,
                charset='utf8mb4',
                cursorclass=pymysql.cursors.DictCursor,
                read_timeout=self.read_timeout,  # 1 sec
                write_timeout=self.write_timeout,  # 1 sec
                connect_timeout=self.connect_timeout,  # 1 sec
                ssl=self.ssl_metadata
            )
            logging.debug('Connection opened successfully.')

    def run_query(self, query, query_params=None):
        try:
            cur = None
            self.open_connection()
            with self.conn.cursor() as cur:
                if 'SELECT' in query or 'SHOW' in query:
                    records = []
                    logging.debug(f'executing query: {query}  params:{query_params}')
                    cur.execute(query, query_params)
                    result = cur.fetchall()
                    for row in result:
                        records.append(row)
                    logging.debug('closing db connection')
                    cur.close()
                    return records
                else:
                    logging.debug(f'executing query: {query}  params:{query_params}')
                    cur.execute(query, query_params)
                    self.conn.commit()
                    affected = f"{cur.rowcount} rows affected."
                    logging.debug('closing db connection')
                    cur.close()
                    return affected
        except pymysql.OperationalError as e:
            # OperationalError is a subclass of MySQLError, so it must be caught first
            print(e)
            raise Exception('Query failed to write')
        except pymysql.MySQLError as e:
            print(e)
            raise Exception('Db Connection failed')

    def close_connection(self):
        if self.conn:
            self.conn.close()
            self.conn = None
            logging.info('Database connection closed.')
sh-4.2$
sh-4.2$ cat test_run.py
import time

from db_test_meter.database import Database
from db_test_meter.util import log, AppConfig


class TestRun:

    def __init__(self, db: Database):
        self.db = db
        self.success_connect_count: int = 0
        self.failed_connect_count: int = 0
        self.current_phase: str = 'INIT'
        self.prev_loop_end_time: float = 0
        self.failure_condition_start_time: float = 0
        self.failure_condition_end_time: float = 0
        self.heartbeat_index = 0
        self.last_inserted_heartbeat_index = 0

    def test_db_connection(self) -> bool:
        try:
            self.db.run_query('SELECT version()')
            print(f'Connection succeeded at {time.ctime()}')
            self.success_connect_count += 1
            return True
        except Exception as e:
            print(f'There was an error: {e}')
            if self.current_phase == 'INIT':
                self.failure_condition_start_time = time.time()
            self.current_phase = 'FAILING'
            self.failed_connect_count += 1
            if self.failed_connect_count <= 600:  # limit error start to ~ 10 minutes
                return False
            else:
                log.fatal('Maximum Db connection failures of 600 occurred, exiting...')
                exit(1)

    def get_db_node_hostname(self):
        query = "SHOW variables LIKE 'hostname'"
        result = self.db.run_query(query)
        if result and 'Value' in result[0]:
            db_node_hostname = result[0]["Value"]
            log.debug(f'Db node Hostname: {db_node_hostname}')
        else:
            raise Exception(f'Unable to retrieve db node hostname with query: {query}')
        return db_node_hostname

    def db_node_heartbeat(self, test_run_id: str) -> bool:
        try:
            if self.current_phase == 'FAILING':
                return self.test_db_connection()
            else:
                self.heartbeat_index += 1
                self.db.run_query(
                    f"INSERT INTO {AppConfig.TEST_DB_NAME}.{AppConfig.TEST_DB_TABLE} SET test_run_id=%s, index_id=%s, created=UNIX_TIMESTAMP()",
                    (test_run_id, self.heartbeat_index,))
                print(f'Insert succeeded at {time.ctime()} test_run_id: {test_run_id}, index_id:{self.heartbeat_index}')
                self.last_inserted_heartbeat_index = self.heartbeat_index
                self.success_connect_count += 1
            return True
        except Exception as e:
            print(f'There was an error: {e}')
            if self.current_phase == 'INIT':
                self.failure_condition_start_time = time.time()
                time.sleep(120)
            self.current_phase = 'FAILING'
            # we've failed so kill this connection
            self.db.close_connection()
            self.failed_connect_count += 1
            if self.failed_connect_count <= 600:  # limit error start to ~ 10 minutes
                return False
            else:
                log.fatal('Maximum Db connection failures of 600 occurred, exiting...')
                exit(1)

    def recovery_detected(self) -> bool:
        if self.current_phase == 'FAILING':
            # we've recovered
            log.debug('moving from phase FAILING -> RECOVERED')
            self.current_phase = 'RECOVERED'
            return True
        return False

    def ensure_minumum_loop_time(self, loop_time_min_in_sec: float, loop_start_time: float, prev_loop_end_time: float):

        if prev_loop_end_time != 0:
            log.debug(f'this loop start time: {loop_start_time}')
            log.debug(f'prev loop start end time: {prev_loop_end_time}')
            last_loop_runtime = loop_start_time - prev_loop_end_time
            log.debug(f'last loop runtime: {last_loop_runtime}')
            if last_loop_runtime < loop_time_min_in_sec:
                sleep_time = loop_time_min_in_sec - last_loop_runtime
                log.debug(f'sleeping {sleep_time}')
                time.sleep(sleep_time)

    def get_last_sync_records(self, test_run_id: str, number_of_records: int) -> dict:
        result = self.db.run_query(
            f'SELECT * FROM {AppConfig.TEST_DB_NAME}.{AppConfig.TEST_DB_TABLE} WHERE test_run_id = %s ORDER BY `index_id` DESC LIMIT %s',
            (test_run_id, number_of_records))
        return result

    def shutdown(self):
        self.db.close_connection()
sh-4.2$
sh-4.2$ cat util.py
import getpass
import logging
import os
import sys
import json

import boto3

client  = boto3.client('secretsmanager')


log = logging.getLogger()


class AppConfig:
    TEST_DB_NAME = 'db_test_meter'
    TEST_DB_TABLE = 'db_sync'


def init_logger(debug=False) -> None:
    log_level = logging.DEBUG if debug else logging.WARNING
    logging.getLogger().setLevel(log_level)
    handler = logging.StreamHandler(sys.stdout)
    handler.setLevel(log_level)
    log.addHandler(handler)


def collect_user_input() -> dict:
    user_input = {'ssl_metadata': None}
    user_input['db_interact_timeout'] = 1  # 1 sec
    response = client.list_secrets(MaxResults=1)
    user_input['secret_arn'] = response['SecretList'][0]['ARN']
    user_input['secret_versionId'] = response['SecretList'][0]['SecretVersionsToStages']
    response = client.get_secret_value(SecretId=user_input['secret_arn'])
    secretString = json.loads(response['SecretString'])
    user_input['db_user'] = secretString['username']
    user_input['db_password'] = secretString['password']
    user_input['db_host'] = secretString['host']
    user_input['db_port'] = secretString['port']
    using_ssl = input('Connecting over SSL (y/n) [y]: ').strip().lower() or 'y'
    if using_ssl == 'y':
        path_to_ssl_cert = input('path to ssl cert [./rds-combined-ca-bundle.pem]: ') or './rds-combined-ca-bundle.pem'
        if not os.path.exists(os.path.abspath(path_to_ssl_cert)):
            log.fatal(f'SSL cert not found at: {path_to_ssl_cert}')
            exit(1)
        user_input['ssl_metadata'] = {'ssl': {'ca': path_to_ssl_cert}}
    print(user_input['db_host'])
    return user_input
sh-4.2$
from __future__ import print_function # Python 2/3 compatibility
import boto3
import json
import decimal
import time
import os

count = 0

session = boto3.session.Session()
region = session.region_name

dynamodb = boto3.resource('dynamodb', region_name=region)
table_name = 'Cast'   # table name
pk = 'year'           # primary key
sk = 'title'          # sort key
file_name = 'cast_full.json'

def create_table():
    try:
        table = dynamodb.create_table(
            TableName=table_name,
            KeySchema=[
                {
                    'AttributeName': pk,
                    'KeyType': 'HASH'  #Partition key
                },
                {
                    'AttributeName': sk,
                    'KeyType': 'RANGE'  #Sort key
                }
            ],
            AttributeDefinitions=[
                {
                    'AttributeName': 'year',
                    'AttributeType': 'N'
                },
                {
                    'AttributeName': 'title',
                    'AttributeType': 'S'
                },
            ],
            BillingMode='PAY_PER_REQUEST'
            #ProvisionedThroughput={
            #  'ReadCapacityUnits': 125,
            #  'WriteCapacityUnits': 125
            # }
        )
        print("Table status:", table.table_status)
    except Exception:
        print("Table exists: uploading data")
        table = dynamodb.Table(table_name)

def add_table():
    table = dynamodb.Table(table_name)
    count = 0
    with open(file_name) as json_file:
        movies = json.load(json_file, parse_float = decimal.Decimal)
        with table.batch_writer(overwrite_by_pkeys=[pk, sk]) as batch:
            for movie in movies:
                titleId = movie['titleId']
                title = movie['title']
                year = int(movie['year'])
                genres = movie['genres']
                runtimeMinutes = int(movie['runtimeMinutes'])
                cast = movie['cast']
                count = count + 1
                print("Adding record count:", count)
                batch.put_item(
                Item={
                    'titleId': titleId,
                    'year': year,
                    'title': title,
                    'genres': genres,
                    'runtimeMinutes': runtimeMinutes,
                    'cast': cast,
                    }
                )
def main():
    create_table()
    add_table()

if __name__ == "__main__":
    main()
from __future__ import print_function # Python 2/3 compatibility
import boto3
import json
import decimal
from boto3.dynamodb.conditions import Key, Attr

table_name = 'Cast'
pk = 'year'
sk = 'title'

session = boto3.session.Session()
region = session.region_name

class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            if o % 1 > 0:
                return float(o)
            else:
                return int(o)
        return super(DecimalEncoder, self).default(o)

dynamodb = boto3.resource('dynamodb', region_name=region)

table = dynamodb.Table(table_name)

fe = Key(pk).between(1990, 1991) & Key(sk).between('A', 'D')
pe = "#yr, title, #ca"
ean = { "#yr": "year", "#ca": "cast",}
esk = None

response = table.scan(
    FilterExpression=fe,
    ProjectionExpression=pe,
    ExpressionAttributeNames=ean
    )

for i in response['Items']:
    print(json.dumps(i, cls=DecimalEncoder))

while 'LastEvaluatedKey' in response:
    response = table.scan(
        ProjectionExpression=pe,
        FilterExpression=fe,
        ExpressionAttributeNames= ean,
        ExclusiveStartKey=response['LastEvaluatedKey']
        )
    #parsing and printing the JSON response
    for i in response['Items']:
        print(i['year'], ":", i['title'] + " and the actors are:")
        for j in i['cast']:
            print(j['name'])
        print('\n')
from __future__ import print_function # Python 2/3 compatibility
import boto3
import json
import decimal
from boto3.dynamodb.conditions import Key, Attr

session = boto3.session.Session()
region = session.region_name
table_name = 'Cast'
pk = 'year'
sk = 'title'

class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            return str(o)
        return super(DecimalEncoder, self).default(o)

dynamodb = boto3.resource('dynamodb', region_name=region)

table = dynamodb.Table(table_name)

print("Movies in 2005 - titles A-L, and list of actors")

response = table.query(
    ProjectionExpression="#yr, title, #ca",
    ExpressionAttributeNames={ "#yr": "year", "#ca": "cast" },
    KeyConditionExpression=Key(pk).eq(2005) & Key(sk).between('A', 'L')
)

for i in response['Items']:
        print(i['year'], ":", i['title'] + " and the actors are:")
        for j in i['cast']:
            print(j['name'])
        print('\n')
from __future__ import print_function

import os
import amazondax
import botocore.session
import boto3

my_session = boto3.session.Session()
region = my_session.region_name

session = botocore.session.get_session()
dynamodb = session.create_client('dynamodb', region_name=region) # low-level client

table_name = "TryDaxTable"

params = {
    'TableName' : table_name,
    'KeySchema': [
        { 'AttributeName': "pk", 'KeyType': "HASH"},    # Partition key
        { 'AttributeName': "sk", 'KeyType': "RANGE" }   # Sort key
    ],
    'AttributeDefinitions': [
        { 'AttributeName': "pk", 'AttributeType': "N" },
        { 'AttributeName': "sk", 'AttributeType': "N" }
    ],
    'ProvisionedThroughput': {
        'ReadCapacityUnits': 10,
        'WriteCapacityUnits': 10
    }
}

# Create the table
dynamodb.create_table(**params)

# Wait for the table to exist before exiting
print('Waiting for', table_name, '...')
waiter = dynamodb.get_waiter('table_exists')
waiter.wait(TableName=table_name)
from __future__ import print_function

import os, sys, time
import amazondax
import botocore.session
import boto3

my_session = boto3.session.Session()
region = my_session.region_name

session = botocore.session.get_session()
dynamodb = session.create_client('dynamodb', region_name=region) # low-level client

table_name = "TryDaxTable"

if len(sys.argv) > 1:
    endpoint = sys.argv[1]
    dax = amazondax.AmazonDaxClient(session, region_name=region, endpoints=[endpoint])
    client = dax
else:
    client = dynamodb

pk = 10
sk = 10
iterations = 50

start = time.time()
for i in range(iterations):
    for ipk in range(1, pk+1):
        for isk in range(1, sk+1):
            params = {
                'TableName': table_name,
                'Key': {
                    "pk": {'N': str(ipk)},
                    "sk": {'N': str(isk)}
                }
            }

            result = client.get_item(**params)
            print('.', end='', file=sys.stdout); sys.stdout.flush()
print()

end = time.time()
print('Total time: {} sec - Avg time: {} sec'.format(end - start, (end-start)/iterations))
 
 

h-4.2$ mysql -h $endpoint -u $user -P $port -p$password mydb
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 35
Server version: 5.7.22-log Source distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [mydb]> STATUS
--------------
mysql  Ver 15.1 Distrib 5.5.68-MariaDB, for Linux (x86_64) using readline 5.1

Connection id:          35
Current database:       mydb
Current user:           admin@10.0.1.218
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server:                 MySQL
Server version:         5.7.22-log Source distribution
Protocol version:       10
Connection:             mydb.cin1rcsx20ld.us-west-2.rds.amazonaws.com via TCP/IP
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
TCP port:               3306
Uptime:                 2 hours 1 min 13 sec

Threads: 2  Questions: 16909  Slow queries: 0  Opens: 371  Flush tables: 1  Open tables: 57  Queries per second avg: 2.324
--------------


h-4.2$ mysql -h $endpoint --ssl-ca=rds-combined-ca-bundle.pem --ssl-verify-server-cert -u $user -P $port -p$password mydb
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 43
Server version: 5.7.22-log Source distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [mydb]>
MySQL [mydb]> STATUS
--------------
mysql  Ver 15.1 Distrib 5.5.68-MariaDB, for Linux (x86_64) using readline 5.1

Connection id:          43
Current database:       mydb
Current user:           admin@10.0.1.218
SSL:                    Cipher in use is DHE-RSA-AES256-SHA
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server:                 MySQL
Server version:         5.7.22-log Source distribution
Protocol version:       10
Connection:             mydb.cin1rcsx20ld.us-west-2.rds.amazonaws.com via TCP/IP
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
TCP port:               3306
Uptime:                 2 hours 7 min 19 sec

Threads: 2  Questions: 17170  Slow queries: 0  Opens: 372  Flush tables: 1  Open tables: 58  Queries per second avg: 2.247
--------------

MySQL [mydb]>


http://bit.ly/PD_DBLAB


Six relational engines on Amazon RDS

Oracle (commercial)

MS-SQL (commercial)

MariaDB (open source)

MySQL (open source)

PostgreSQL (open source)

Aurora (Amazon)

 

Benefits
Easy management: vertical scaling (requires downtime)
High scalability: horizontal scaling (number of instances)
Availability/durability: Multi-AZ, read replicas
Storage option: EBS
SSL (encryption in transit), KMS (encryption at rest)

Aurora
Storage: auto-scales up to 128 TiB (64 TiB on older engine versions)

Read Replica
Up to 5 read replicas per source instance (asynchronous replication)

Multi-AZ deployment: standby replica in another AZ
Synchronous replication
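The read-replica setup above can be sketched with the boto3 RDS API. The instance identifiers here are made up, and the actual API call is left commented out so the snippet runs without AWS credentials:

```python
# Sketch: creating an RDS read replica with boto3 (hypothetical identifiers).
# The real call is rds.create_db_instance_read_replica(**params).

def replica_params(source_id, replica_id, az=None):
    """Build the arguments for create_db_instance_read_replica."""
    params = {
        "SourceDBInstanceIdentifier": source_id,  # the primary instance
        "DBInstanceIdentifier": replica_id,       # name of the new replica
    }
    if az is not None:
        params["AvailabilityZone"] = az           # optional: pin the replica's AZ
    return params

params = replica_params("mydb", "mydb-replica-1")
print(params)
# import boto3
# rds = boto3.client("rds")
# rds.create_db_instance_read_replica(**params)
```

Replication to the replica is asynchronous, so reads from it may briefly lag the primary.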
   
   

 

   
Database workload requirements

Data storage
Data usage
Data volume, velocity, variety
Data lifetime
Durability, availability
Performance: required latency, IOPS (I/O operations per second),
  read/write throughput, concurrency
Capacity/scale: scale-in/scale-out, computing power, memory capacity
High availability: replication, clustering, multiple instances

ACID vs. BASE

ACID
A: Atomicity
C: Consistency
I: Isolation
D: Durability

How structured (relational) databases maintain consistency and integrity.
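Atomicity, the A in ACID, is easy to see with a runnable sketch using Python's stdlib sqlite3 (the table and values are made up for illustration):

```python
# Atomicity demo with sqlite3: a failing statement inside a transaction
# rolls back every change made in that transaction.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # one transaction: commits on success, rolls back on exception
        conn.execute("UPDATE account SET balance = balance - 60 WHERE name = 'alice'")
        # duplicate primary key -> IntegrityError; the debit above is undone too
        conn.execute("INSERT INTO account VALUES ('alice', 999)")
except sqlite3.IntegrityError:
    pass

balance = conn.execute("SELECT balance FROM account WHERE name = 'alice'").fetchone()[0]
print(balance)  # 100: the partial update never became visible
```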


BASE
Basically Available, Soft state, Eventually consistent
How structured or semi-structured databases maintain consistency and integrity.
Basically Available
  Changes on one system are usable immediately.
Soft state
  Temporary (partial) inconsistency is allowed.
Eventually consistent
  Every system eventually receives every change.
Supports data integrity in non-relational (NoSQL) databases.

CAP theorem and PIE theorem

CAP
- Consistency
  Every client always reads the same data; identical requests produce identical responses.
- Availability
  Clients can always read and write; every request is guaranteed to be handled.
- Partition tolerance
  The database keeps operating even if part of the data transfer is lost.

CA: non-distributed relational database systems
AP: distributed non-relational database systems
CP: distributed relational and non-relational database systems

PIE
- Pattern (query) flexibility
  The system supports arbitrary access patterns and ad hoc queries.
- Infinite scale
  The system scales without limit, even when a partition is lost.
- Efficiency
  The system always delivers the required latency.

PE: relational line-of-business database systems
IE: non-relational e-commerce or streaming systems
PI: relational analytics database systems

 

 


 

Type / Features / Use cases / AWS services

Relational: referential integrity, ACID, transactions, schema-on-write
  Use cases: transactional workloads, e-commerce, most social apps, CRM, finance
  Services: Amazon RDS, Amazon Aurora, Amazon Redshift

Non-relational, key-value: high throughput, low latency, unlimited scale
  Use cases: non-transactional data, shopping carts, customer preferences
  Services: Amazon DynamoDB

Non-relational, document: stores documents; query any attribute with fast access
  Use cases: content management, personalization, mobile
  Services: Amazon DocumentDB

Non-relational, in-memory: microsecond latency; query by key
  Use cases: leaderboards, real-time analytics, caching
  Services: Amazon ElastiCache

Non-relational, graph: create and navigate relationships between data quickly and easily
  Use cases: fraud detection, social networking, recommendation engines
  Services: Amazon Neptune

Non-relational, ledger: complete, immutable record of every change to application data
  Use cases: systems of record, supply chain, healthcare, finance
  Services: Amazon QLDB


1.  AWS EC2 Database

2. Amazon RDS

3. Amazon DocumentDB

4. Amazon DynamoDB

5. Amazon Neptune

6. Amazon Quantum Ledger Database (QLDB)

7. Amazon ElastiCache

8. Amazon Redshift

 

