NAT Instances and NAT Gateways

VPC creation steps:

1. Create the VPC.
2. Create two subnets (one public, one private).
3. Create a new internet gateway and attach it to the VPC.
4. Create a public route table with a route out to the internet (keep the main route table private at all times).
5. Associate the public subnet with the newly created route table to make it public.
6. Enable “auto-assign public IP” on the public subnet (steps 1-6 are sketched in a boto3 example after this list).
7. Launch EC2 instances in the public subnet and in the private subnet.
8. Copy the contents of the myEC2key key and create a new key file on the public-subnet EC2 instance.
9. Create a NAT instance by selecting one of the community AMIs and launch it in the public subnet.
10. Disable the source/destination check on the NAT instance.
11. Create a route from the private subnet out through the NAT instance to the internet: edit the main route table and add a route pointing to the NAT instance.
12. Delete the NAT instance.
13. Create a NAT gateway (always deploy it into the public subnet and allocate an Elastic IP to it).
14. With NAT gateways you don’t need to disable the source/destination check, and you don’t need to put them behind security groups.
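
A minimal boto3 sketch of steps 1-6; the region, CIDR blocks, and subnet layout below are illustrative assumptions, not values from the course:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# 1. Create the VPC (CIDR block is an arbitrary example)
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# 2. Create a public and a private subnet
public_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]
private_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24")["Subnet"]["SubnetId"]

# 3. Create an internet gateway and attach it to the VPC
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# 4. Create a public route table with a route out to the internet
#    (the main route table stays private)
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)

# 5. Associate the public subnet with the new route table
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_subnet)

# 6. Enable auto-assign public IP on the public subnet
ec2.modify_subnet_attribute(SubnetId=public_subnet, MapPublicIpOnLaunch={"Value": True})
```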

A NAT instance is a single point of failure.
Putting the NAT instance behind an Auto Scaling group helps, but a NAT gateway is a better choice than a NAT instance.
When you create a NAT gateway you always need to put it in the public subnet and update the route table to point the private instances at the NAT gateway (see the sketch below).
You don’t need to put it behind security groups or apply security patches.
Redundancy is automatically built in.
A NAT gateway supports up to 10 Gbps of bandwidth.
You can use NACLs with a NAT gateway, but not security groups.
NAT gateways are the standard choice in production.
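
A minimal boto3 sketch of creating a NAT gateway in the public subnet and updating the private route table; the subnet and route table IDs are placeholders standing in for the ones created in the earlier sketch:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Placeholders: substitute the IDs created in the earlier VPC sketch
public_subnet_id = "subnet-0123456789abcdef0"
private_route_table_id = "rtb-0123456789abcdef0"

# Allocate an Elastic IP for the NAT gateway
allocation_id = ec2.allocate_address(Domain="vpc")["AllocationId"]

# Create the NAT gateway in the PUBLIC subnet
nat_gw_id = ec2.create_nat_gateway(
    SubnetId=public_subnet_id,
    AllocationId=allocation_id,
)["NatGateway"]["NatGatewayId"]

# Wait until it is available before adding routes
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])

# Update the private (main) route table so 0.0.0.0/0 goes via the NAT gateway
ec2.create_route(
    RouteTableId=private_route_table_id,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gw_id,
)
```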

Exam Tips: NAT Instances
* When creating a NAT instance, disable the source/destination check on the instance (sketched in code after this list).
* The NAT instance must be in the public subnet.
* There must be a route out of the private subnet to the NAT instance in order for this to work.
* When you deploy a NAT instance you must allocate a public IP for it.
* The amount of traffic a NAT instance supports depends on the instance size. If you are bottlenecking, increase the instance size.
* You can create HA using Auto Scaling groups, multiple subnets in different AZs, and a script to automate failover from one NAT instance to another.
* NAT instances are always behind a security group.
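
A minimal boto3 sketch of disabling the source/destination check from the first tip; the instance ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder ID of the NAT instance launched in the public subnet
nat_instance_id = "i-0123456789abcdef0"

# A NAT instance forwards traffic that is neither sourced from nor destined to
# itself, so the default source/destination check must be turned off
ec2.modify_instance_attribute(
    InstanceId=nat_instance_id,
    SourceDestCheck={"Value": False},
)
```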

Exam Tips: NAT Gateways
* Preferred by enterprises
* Scale automatically up to 10 Gbps
* No need to patch
* Not associated with security groups
* Automatically assigned a public IP address
* Route tables must be updated to point at the NAT gateway
* No need to disable source/destination checks


EC2 101

      • EC2 – Elastic Compute Cloud
      • Provides resizable compute capacity in the cloud
      • Reduces the time required to obtain and boot new server instances to minutes, which allows you to quickly scale capacity up and down as your requirements change
      • With the public cloud you can provision virtual instances within seconds
      • Pay only for what you use
      • EC2 options:
        • On Demand:  pay fixed amount by the hour with no commitment
        • Reserved:  provide you with a capacity reservation and offer a significant discount on the hourly charge for an instance. 1 year or 3 year terms. Mostly used for steady-state servers. The longer the contract, the bigger the discount.
        • Spot:  works like a stock market, driven by supply and demand. You set a bid price for the instance; if your bid price meets the spot price, you win the instance. Spot prices vary all the time. If the spot price rises above your bid price, AWS terminates the instance (with only a two-minute warning). Big pharmaceutical or genomics companies use spot instances for big data processing, grid computing, or high-performance computing to save money. (A boto3 sketch of a spot request follows this list.)
      • On Demand:
        • Users who want the low cost and flexibility of Amazon EC2 w/o any up-front payment or long-term commitment
        • Applications with short term, spiky, or unpredictable workloads that cannot be interrupted
        • Applications being developed or tested on Amazon EC2 for the first time
        • To supplement reserved instances
      • Reserved:
        • Applications with steady state or predictable usage
        • Applications that require reserved capacity
        • Users able to make upfront payments to reduce their total computing costs even further
        • The more you pay upfront and the longer the contract, the cheaper the cost
      • Spot:
        • Applications that have flexible start and end times
        • Applications that are only feasible at very low computing prices
        • Users with urgent computing needs for large amounts of additional capacity
        • If the spot instance is terminated by Amazon EC2, you will not be charged for a partial hour. However, if you terminate the instance yourself, you will be charged for any hour in which the instance ran.
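
A minimal boto3 sketch of placing the spot bid described above; the AMI ID, instance type, and bid price are illustrative placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Bid $0.05/hour for a one-time spot instance; if the spot price rises above
# the bid, AWS reclaims the instance
response = ec2.request_spot_instances(
    SpotPrice="0.05",                        # your maximum bid, in USD per hour
    InstanceCount=1,
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
        "InstanceType": "m4.large",          # placeholder instance type
        "KeyName": "myEC2key",               # key pair name from the notes
    },
)
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```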

Amazon Import/Export

      • 2 types
        • Import/Export Disk
        • Import/Export Snowball
      • Import/Export Disk
        • Moves large amounts of data into and out of AWS using portable devices
        • Transfers your data directly onto and off of storage devices using Amazon’s high-speed internal network, bypassing the internet.
        • Buy a disk, transfer data onto the disk, send it to AWS, they will upload it to S3 or Glacier or EBS, then send the disks back to the customer.
        • Allows Import to S3, EBS and Glacier
        • Allows Export from S3

      • Snowball
        • Looks like a briefcase
        • Petabyte-scale data transfer
        • Each Snowball can carry 50 TB or more of data
        • Uses multiple layers of security designed to protect your data, including tamper-resistant enclosures, 256-bit encryption, and an industry-standard Trusted Platform Module (TPM) designed to ensure both security and full chain-of-custody of your data.
        • Once the data transfer job has been processed and verified, AWS performs a software erasure of the Snowball appliance.
        • You don’t own a Snowball; you rent it per job.
        • Only available in the US.
        • Only works with S3.
        • Multiple appliances can be used in parallel for larger workloads.
        • Allows import to and export from S3 only. (A boto3 sketch of creating an import job follows this list.)
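
A hedged boto3 sketch of creating a Snowball import job into S3; it assumes a shipping address and an IAM role for Snowball have already been set up, and every ARN/ID below is a placeholder:

```python
import boto3

snowball = boto3.client("snowball", region_name="us-east-1")  # assumed region

# Create an import job: AWS ships an appliance, you load data on site,
# ship it back, and the data is imported into the named S3 bucket
job = snowball.create_job(
    JobType="IMPORT",
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::my-import-bucket"}      # placeholder bucket
        ]
    },
    Description="Bulk import into S3 via Snowball",
    AddressId="ADID00000000-0000-0000-0000-000000000000",        # placeholder address ID
    RoleARN="arn:aws:iam::123456789012:role/snowball-import",    # placeholder role
    SnowballCapacityPreference="T50",
    ShippingOption="SECOND_DAY",
)
print(job["JobId"])
```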

Amazon Storage Gateway

      • Connects an on-premises software appliance with cloud-based storage
      • It is a virtual appliance that can be installed on a hypervisor
      • Available as a VM image that you download and install in your datacenter
      • Supports either VMware ESXi or Microsoft Hyper-V
      • Once installed and associated with your AWS account through the activation process, you can use the AWS console to create the storage gateway option that suits your use case (see the boto3 sketch after this list)
      • Three types of storage gateways:
        • Gateway Stored Volumes: keep your entire data set on site.
          • Storage gateway then backs this data up asynchronously to Amazon S3.
          • Gateway stored volumes provide durable and inexpensive off-site backups that you can recover locally or from Amazon EC2.
        • Gateway Cached Volumes: your most frequently accessed data is stored locally
          • Entire dataset is stored in S3.
          • You don’t have to go out and buy large SAN arrays for your office/datacenter which will result in cost savings.
          • If you lose internet connection, customer will not be able to access all the data.
        • Gateway Virtual Tape Library: customer can have a limitless collection of virtual tapes.
          • Each virtual tape can be stored in a Virtual Tape Library backed by Amazon S3 or a Virtual Tape Shelf backed by Amazon Glacier.
          • The VTL exposes an industry-standard iSCSI interface which provides your backup application with online access to the virtual tapes. Supported by NetBackup, Backup Exec, Veeam, etc.
      • The storage gateway connects to AWS either over the internet or via Direct Connect.
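
A hedged boto3 sketch of activating the appliance once it is running on the hypervisor and listing registered gateways; the activation key, region, timezone, and gateway type shown are placeholders/assumptions (cached volumes is just one of the three options above):

```python
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")  # assumed region

# Activate the on-premises appliance; the activation key is obtained from the
# appliance itself after it boots in your datacenter
gateway = sgw.activate_gateway(
    ActivationKey="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX",  # placeholder key
    GatewayName="my-storage-gateway",
    GatewayTimezone="GMT-5:00",
    GatewayRegion="us-east-1",
    GatewayType="CACHED",   # or "STORED" / "VTL" for the other gateway types
)
print(gateway["GatewayARN"])

# List the gateways registered to the account
for gw in sgw.list_gateways()["Gateways"]:
    print(gw["GatewayARN"], gw.get("GatewayType"))
```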

 

S3 – Security and Encryption

      • By default all S3 buckets are “private”
      • You can set up access control on buckets using
        • Bucket Policies – entire bucket
        • Access Control Lists – to individual objects in the bucket.
      • S3 buckets can be configured to create access logs, which record all requests made to the bucket. The logs can be delivered to another bucket in the same account or to a bucket in another AWS account (cross-account access).
      • Encryption: 4 different methods and 2 types of encryption
        • In Transit – when sending data to and from the bucket. Secured using SSL/TLS (HTTPS).
        • Data at Rest – 4 different methods
          • Server Side Encryption
            • SSE-S3:  S3-managed keys. Each object is encrypted with a unique key employing strong multi-factor encryption. Amazon also encrypts the key itself with a master key that it rotates regularly. Amazon S3 manages all the keys using AES-256.
            • SSE-KMS: AWS Key Management Service. Similar to SSE-S3 but with extra features and additional charges. An envelope key protects the data’s encryption key, which provides added protection against unauthorized access to the objects. It also provides an audit trail (who used the key and when it was used). There is an option to create and manage the encryption keys yourself.
            • SSE-C: Customer-provided keys. In this case the user manages the encryption key and Amazon manages the actual encryption and decryption. (The three server-side options are sketched in code after this list.)
          • Client Side Encryption: data is encrypted on the client side and then uploaded to S3.
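
A minimal boto3 sketch of the three server-side encryption options when uploading an object; the bucket name, KMS key alias, and customer key are placeholders (client-side encryption is not shown, since you encrypt before upload):

```python
import os

import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder bucket name

# SSE-S3: Amazon S3 manages the keys (AES-256)
s3.put_object(Bucket=bucket, Key="sse-s3.txt", Body=b"hello",
              ServerSideEncryption="AES256")

# SSE-KMS: keys managed through AWS KMS (audit trail, optional customer-managed key)
s3.put_object(Bucket=bucket, Key="sse-kms.txt", Body=b"hello",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="alias/my-key")  # placeholder key alias

# SSE-C: you supply the key; S3 encrypts/decrypts but does not store the key
customer_key = os.urandom(32)  # 256-bit key that you manage yourself
s3.put_object(Bucket=bucket, Key="sse-c.txt", Body=b"hello",
              SSECustomerAlgorithm="AES256",
              SSECustomerKey=customer_key)
```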

 
