AWS Interview Questions and Answers
Q1. When designing an architecture for an intended solution, what is the role of AMI?
An Amazon Machine Image (AMI) serves as a template for a virtual machine; every instance launched in the cloud is created from an AMI. AWS offers pre-built AMIs that we can choose from when launching an instance; some AMIs are not free and can be purchased through the AWS Marketplace. We can also create our own customized AMI, which helps save space on AWS: if a particular set of software is not needed, the AMI can be customized to exclude it. This makes the image cost-efficient, since redundant components are removed.
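As a minimal sketch of how such a customized AMI might be baked from a tuned instance: the parameters below are the kind that would be passed to boto3's EC2 `create_image` call. The instance ID, image name, and description are placeholders, not values from the answer above.

```python
# Sketch: parameters for creating a custom AMI from an instance that has
# been stripped of unneeded software. The instance ID and name are
# hypothetical; in practice this dict would be passed to
# ec2_client.create_image(**params) via boto3.

def custom_ami_params(instance_id: str, name: str, no_reboot: bool = True) -> dict:
    """Build the parameter set for imaging a trimmed-down instance."""
    return {
        "InstanceId": instance_id,  # instance with redundant packages removed
        "Name": name,               # must be unique within the account/region
        "NoReboot": no_reboot,      # image without stopping the instance
        "Description": "Custom AMI with unneeded software stripped out",
    }

params = custom_ami_params("i-0123456789abcdef0", "slim-web-server-v1")
print(params["Name"])
```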
Q2. What to do in case of AWS direct connection failure?
If a backup AWS Direct Connect connection has been configured, traffic will fail over to the second connection when the first one fails. It is important to enable Bidirectional Forwarding Detection (BFD) when configuring the connections to ensure fast failure detection. Similarly, if a backup IPsec VPN connection has been configured, all Virtual Private Cloud (VPC) traffic will fail over to the VPN connection automatically. Traffic to and from public resources such as Amazon S3 will be routed over the Internet. If there is neither a backup AWS Direct Connect link nor a backup IPsec VPN link, Amazon VPC traffic will be dropped in the event of a failure.
Q3. How do Amazon RDS, DynamoDB, and Redshift differ?
- Amazon Relational Database Service (RDS) is a managed service for relational databases. RDS handles patching, upgrades, and backups of the databases without user intervention. RDS manages only structured data.
- DynamoDB, by contrast, is a NoSQL database service, and NoSQL can handle unstructured data.
- Redshift is a different kind of service altogether: it is a data warehouse and is used for data analysis.
Q4. How is data loaded into Amazon Redshift from other data sources?
To pull data together from Amazon EC2, Amazon DynamoDB, Amazon RDS, Amazon EMR, or any SSH-enabled host and load it directly into Amazon Redshift, we use the COPY command, which loads the data in parallel.
AWS Data Pipeline provides a high-performance, reliable, fault-tolerant solution for loading data from a variety of AWS data sources. With AWS Data Pipeline, we can specify the data source and the desired data transformations, and then execute a pre-written import script to load the data into Amazon Redshift.
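As a sketch of what the COPY command looks like in practice, the helper below composes a COPY statement for loading delimited, gzip-compressed files from Amazon S3. The table name, bucket path, and IAM role ARN are hypothetical placeholders.

```python
# Sketch: composing a Redshift COPY statement that bulk-loads data in
# parallel from Amazon S3. Table, bucket path, and IAM role below are
# placeholders, not real resources.

def redshift_copy_sql(table: str, s3_path: str, iam_role: str) -> str:
    """Return a COPY statement for loading pipe-delimited, gzipped data."""
    return (
        f"COPY {table} "
        f"FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        "DELIMITER '|' GZIP;"
    )

sql = redshift_copy_sql(
    "sales",
    "s3://example-bucket/sales/",                       # hypothetical bucket
    "arn:aws:iam::123456789012:role/RedshiftCopyRole",  # hypothetical role
)
print(sql)
```

The statement would be executed against the Redshift cluster with any SQL client; COPY then fans the load out across the cluster's slices.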
Q5. How can an existing domain name be transferred to Amazon Route 53 without affecting web traffic?
First, the DNS record data for the domain name must be obtained. This data is generally available as a “zone file” that can be requested from the existing DNS provider. Once the DNS record data is received, Route 53’s Management Console or its web-services interface can be used to create a hosted zone that will store the DNS records for the domain name, and the transfer process can begin. This includes updating the name servers for the domain name to the ones associated with the hosted zone. To finish the process, contact the registrar with whom the domain is registered and have the name server delegation updated. As the registrar propagates the new name server delegation, DNS queries will begin to be answered by Route 53.
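As a sketch of the first step, the snippet below reads record data out of a simple zone-file export before the records are re-created in a Route 53 hosted zone. The domain and records are hypothetical, and real zone files contain more directives (SOA records, multi-line entries, and so on) than this minimal parser handles.

```python
# Sketch: parsing 'name ttl class type value' lines from a minimal
# zone-file export. The example.com records below are hypothetical.

def parse_zone_records(zone_text: str) -> list:
    """Return a list of record dicts from a simple BIND-style zone file."""
    records = []
    for line in zone_text.splitlines():
        line = line.split(";")[0].strip()  # strip comments
        if not line or line.startswith("$"):
            continue  # skip blanks and directives like $TTL / $ORIGIN
        name, ttl, _cls, rtype, value = line.split(None, 4)
        records.append({"name": name, "ttl": int(ttl),
                        "type": rtype, "value": value})
    return records

zone = """\
$TTL 300
www.example.com. 300 IN A 192.0.2.10   ; web server
example.com.     300 IN MX 10 mail.example.com.
"""
for r in parse_zone_records(zone):
    print(r["name"], r["type"], r["value"])
```

Each parsed record would then be submitted to Route 53 as a resource record set in the new hosted zone.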
Q6. What tools are used to spin up the servers?
To spin up servers, we can roll our own scripts that call the AWS Application Programming Interface (API) tools; such scripts can be written in bash, Perl, or any other language of the user’s choice. Alternatively, we can use a configuration management and provisioning system such as Puppet or Opscode Chef. A tool like Scalr can also be used.
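As a sketch of the roll-your-own approach, a provisioning script might compose and shell out to an AWS CLI invocation like the one below. The AMI ID, instance type, and key pair name are placeholders.

```python
# Sketch: building an 'aws ec2 run-instances' invocation that a
# provisioning script could pass to subprocess.run(). The AMI ID,
# instance type, and key name are hypothetical.

def run_instances_cmd(ami_id: str, instance_type: str,
                      key_name: str, count: int = 1) -> list:
    """Compose the argument list for spinning up EC2 instances via the CLI."""
    return [
        "aws", "ec2", "run-instances",
        "--image-id", ami_id,
        "--instance-type", instance_type,
        "--key-name", key_name,
        "--count", str(count),
    ]

cmd = run_instances_cmd("ami-0123456789abcdef0", "t3.micro", "deploy-key")
print(" ".join(cmd))
```

Keeping the command as an argument list (rather than a single string) avoids shell-quoting issues when it is eventually executed.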