
The Definitive Guru of Dev Ops

@devopsguru
John Varghese is a Cloud Steward at Intuit for both the AI and Futures Business Units. He drives the adoption of new technologies and sound DevOps practices. John runs the Bay Area AWS Meetup group focusing on the Peninsula and the South Bay. Tweet him at @jvusa.
63 Posts
Text
devopsguru

Openssl Legacy mode

Let’s say you are trying to create an event ticket in Apple Wallet, and you need to work with certificates and PassIDs. You try to run this command

openssl pkcs12 -in yourapp.pass.certificate.p12 -nocerts -out yourapp.pass.private.key

or this command

openssl pkcs12 -in yourapp.pass.certificate.p12 -clcerts -nokeys -out yourapp.pass.certificate.pem

But you get this error

Error outputting keys and certificates
C0AF8548F87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:355:Global default library context, Algorithm (RC2-40-CBC : 0), Properties ()

You can overcome it by using what I call “legacy mode”: just add the -legacy option at the end of the command.

For example the second command would now become

openssl pkcs12 -in yourapp.pass.certificate.p12 -clcerts -nokeys -out yourapp.pass.certificate.pem -legacy
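The -legacy flag only exists in OpenSSL 3.x, where RC2-40-CBC (used by these old .p12 files) was moved into the legacy provider; OpenSSL 1.x still ships it by default and does not accept the flag. Here is a small sketch (the helper name is my own) that picks the flag based on the version string:

```shell
# Decide whether this OpenSSL build needs the -legacy flag.
# OpenSSL 3.x moved RC2-40-CBC into the "legacy" provider; 1.x has it built in.
needs_legacy() {  # usage: needs_legacy "$(openssl version)"
  case "$1" in
    "OpenSSL 3."*) return 0 ;;  # 3.x: pass -legacy
    *)             return 1 ;;  # 1.x / LibreSSL: flag not needed (or not supported)
  esac
}

# Usage with the second command from the post:
# flag=""
# needs_legacy "$(openssl version)" && flag="-legacy"
# openssl pkcs12 -in yourapp.pass.certificate.p12 -clcerts -nokeys \
#   -out yourapp.pass.certificate.pem $flag
```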

Text
devopsguru

Nothing is impossible in DevOps. We will have it done by Q4.

Text
devopsguru

How do I find the right AMI to start with?

Alright. Assume you want to find the correct AMI for Ubuntu version 20.04 LTS to use in Oregon on AWS.

Step 1. Go to https://cloud-images.ubuntu.com/locator/ec2/

Step 2. Scroll to the bottom and filter the Zone, Version, and Architecture.
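If you prefer the command line, Canonical also publishes the current AMI IDs in the AWS SSM Parameter Store. A sketch of the lookup — the helper function name is my own, and you should verify the parameter path for your release:

```shell
# Build the SSM Parameter Store path Canonical publishes for Ubuntu AMIs.
ubuntu_ami_param() {  # usage: ubuntu_ami_param <release> [arch]
  printf '/aws/service/canonical/ubuntu/server/%s/stable/current/%s/hvm/ebs-gp2/ami-id' \
    "$1" "${2:-amd64}"
}

# Current Ubuntu 20.04 LTS AMI in Oregon (requires the AWS CLI and credentials):
# aws ssm get-parameter --region us-west-2 \
#   --name "$(ubuntu_ami_param 20.04)" \
#   --query 'Parameter.Value' --output text
```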

Text
devopsguru

So you want both versions of Java?

Ok. Now that you have installed Java 11 using Corretto with the instructions from earlier this week, you find that some of your old programs are still dependent on Java 8.

Good news. You can have both Java 1.8 (aka Java 8) and Java 11 installed at the same time. Follow the same steps as you did for Java 11, except pick Corretto 8. So now you have both Javas installed.

If you run the command /usr/libexec/java_home you will still see that Java 11 is active: /Library/Java/JavaVirtualMachines/amazon-corretto-11.jdk/Contents/Home. If you type java -version it will still say Java 11 is running. You can make Java 8 active by running this command: export JAVA_HOME=$(/usr/libexec/java_home -v '1.8')

In your ~/.bash_profile you can switch between the versions you want by commenting out the one you don’t want. Here is one such profile with both JAVA_HOMEs listed and only one active.

export M2_HOME=/usr/local/apache-maven-3.6.1/bin/
export JAVA_HOME=$(/usr/libexec/java_home -v '11') # run this for Java 11
# export JAVA_HOME=$(/usr/libexec/java_home -v '1.8') # run this for Java 8
export PATH=$PATH:$JAVA_HOME:$M2_HOME

You can copy it from here
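Instead of commenting lines in and out, you could also drop a small switcher function into the profile. This is my own sketch, not part of the original post; it assumes macOS's /usr/libexec/java_home helper (made overridable via JAVA_HOME_CMD so the function can be tested without a Mac):

```shell
# Switch JAVA_HOME between installed JDKs, e.g. `setjdk 1.8` or `setjdk 11`.
# JAVA_HOME_CMD defaults to macOS's java_home helper; override it for testing.
setjdk() {
  local helper="${JAVA_HOME_CMD:-/usr/libexec/java_home}"
  JAVA_HOME="$("$helper" -v "$1")" || return 1
  export JAVA_HOME
  echo "JAVA_HOME is now $JAVA_HOME"
}
```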

Text
devopsguru

Installing Java (aws corretto) and maven (mvn) on mac in a jiffy

Install Java

If you are a fan of a no-cost, multiplatform, production-ready distribution of the Open Java Development Kit, then Amazon’s Corretto is the Java you need. This is how I installed it on a new laptop that did not have Java. I’m not a very frequent user of Java, so if you already have a Java installed, you probably know how to make it inactive.

  1. Go to https://aws.amazon.com/corretto/
  2. I saw two buttons - “Download Amazon Corretto 8″ and “Download Amazon Corretto 11″. I picked 11.
  3. It downloads a pkg file. Just double click it.
  4. Now in a terminal type java -version

That was it. Where previously it said Java was not installed, I got

java -version
openjdk version "11.0.3" 2019-04-16 LTS
OpenJDK Runtime Environment Corretto-11.0.3.7.1 (build 11.0.3+7-LTS)
OpenJDK 64-Bit Server VM Corretto-11.0.3.7.1 (build 11.0.3+7-LTS, mixed mode)

Sweet.

Install Maven

To install Maven, I just did this. These steps are generic and will work for you as well.

  1. Google for “download maven”
  2. Click on the first result (indented) that says “Download Apache Maven”
  3. In the resulting page, scroll down to “Binary zip archive” and click on the hyperlink in the column “Link”
  4. A zip file will be downloaded. 
  5. Double click it to uncompress it. In my case the directory was called “apache-maven-3.6.1″
  6. Move the uncompressed directory to /usr/local/ (or anywhere else you prefer to leave it installed.)

That’s it for the installation.

Configure them both

Now for the configuration:

If you use bash (like most people) edit (or create a new) ~/.bash_profile and add these three lines.

export M2_HOME=/usr/local/apache-maven-3.6.1/bin/
export JAVA_HOME=$(/usr/libexec/java_home) # this eval always picks the latest java
export PATH=$PATH:$JAVA_HOME:$M2_HOME

Copy from here

Of course, remember to use the appropriate directory name.

If, like me, you use zsh, then add/update your .zshrc with the same 3 lines.

Now the final test. These commands should all give reasonable answers.

  1. java -version
  2. mvn -version
  3. echo $JAVA_HOME
  4. echo $M2_HOME

Bada bing, bada boom. Two minutes total. Or if you are like me, then about 15 minutes.

Text
devopsguru

My prediction for the next 5 years

Today is May 1, 2019. Roughly 10 years ago, I predicted that the future would be dominated by Mobile and the Cloud. I suppose that was not too hard to guess. Well I am only so smart. So here is my next easy prediction.

This prediction is in terms of software development. Hmm… actually software is eating the world. So I guess this will be valid everywhere.

I see a decline in the popularity of DevOps. I think that is temporary. Maybe its name will change. But the culture promoted by the DevOps movement is here to stay and is the only way to a better collaborative future. I think it will become more pervasive and spread to other fields - just as TQM came from manufacturing to software engineering, DevOps will spread from software engineering to all the other fields.

Security will become top of mind for everyone everywhere. Especially security of customer data. If we don’t build security in from the start of every product, it won’t make it to the next level. You would not want to retrofit security. In fact, security consciousness will become so ubiquitous that it will be table stakes in any generated template for any framework or app.

Serverless applications will take over the world. And prices will drop even further. Today there are several use-cases where serverless does not make sense. In many instances, this is because of limitations in the way serverless is architected. It is possible that this underlying architecture might not change drastically. But even if it stays stable, applications will change to take advantage of the serverless architecture. There are still a lot of problems with implementing serverless, but that is because it is still in its infancy. The promise is real.

Artificial Intelligence Applications. This is not AI like in the movies, but AI like in real life. Every day new applications come out that leverage machine learning, deep learning, and the like. This is only going to increase.

All this will be true on April 30, 2024. But these things are all here already. What can I predict that does not really exist beyond its infancy today but will become mainstream by 2024? That’s a tall order, and something I am going to take a daring guess on. Let me see… hmm… it will probably have to do with history repeating itself. What can it be? I would hate it if self-driving cars are still not approved legislatively. Air traffic will be just as hard to manage, if not harder. I would have probably taken a ride in a Kitty Hawk mobile. But what is not a promise today that will be a reality? Something disruptive. Will it be in education? Got it. I think databases will take care of themselves - not needing DBAs to manage them anymore. Nah… too optimistic. Well, I guess the only thing that is certain other than change, taxes, and death is that history will repeat itself. So that is my prediction. Some things will happen in the next five years that are not logically obvious now, but in retrospect will look inevitable and darn obvious!

Text
devopsguru

AWS re:Invent minus the sessions

“I’m going into airplane mode,” I texted my wife. Now I could relax for an hour or so. The Uber driver was smart. When he saw the half mile long line of cars in the departure lane, he had asked me, “Shall I drop you at arrivals? You just need to go up the escalator.”

“Sure,” I said. Glad to avoid wasting 15 minutes. I had come up the escalator a few minutes later and found a long line at security. It was to be expected. re:Invent is always the Monday after Thanksgiving. Everyone is going back home. What should have normally taken 10 minutes took 45. I barely made it to the gate in time. As luck would have it, the flight was delayed. So I had just enough time to buy a packaged breakfast and hasten back to board.

I have been going to re:Invent since 2016. Each year there were 10,000 more people than the previous year. This year there were 50,000 attendees. The event was spread across 14 hotels in Las Vegas.



Despite the spread, every breakout session was crowded. I will not go into the details of how you have to stand in line for half an hour to get into any session. You can read that on plenty of other blogs. In fact, if you just want to watch videos of the talks I recommend heading straight to https://reinventvideos.com/. There you can search all AWS re:Invent talks going back to 2012 and filter them by level.

Walking between hotels takes almost half an hour even if they are adjacent - because you have to walk through the casinos. If they are not adjacent, you have to take the shuttle, which I think runs approximately every half hour. So I did not really go to any other hotel. I had decided to spend all my time at the Sands Expo/Venetian and that is what I did. In fact, next year I am even going to stay at the Venetian. Just for bragging rights - the Venetian gets sold out first. And most of my events happen there.

If you cannot get a room at the Venetian though, I’ll recommend a wonderful second choice. The Wynn Towers. I’m not sure if the Palazzo has this advantage, but if you stay at the Wynn Towers (not the Wynn) you can come out the Towers entrance and go straight to the Sands Expo in less than 10 minutes without walking through a single casino. I think the Palazzo will probably have you walk in front of a store or two. But from the Wynn Towers - nothing. The Sands Expo is right across the street and around the corner.

This year at the Expo hall there was a developers lounge. It had a small section manned by user group leaders from around the world, who took turns providing information about local communities to help bring AWS users together. Which reminds me about the AWS Community Day I helped organize in the Bay Area this year. I had wanted to know more about how they did the community day in Japan - JAWS is one of the most active AWS communities - so I had traveled to Japan this summer. There I had met Shigeru Numaguchi. I met him again at the user group booth.


Normally, you just need to wear a t-shirt to these events. But on Monday when I took this picture with Shigeru, I had to attend the AWS heroes annual get together. That’s why I am wearing a shirt. We had Jeff Barr, Ian Massingham, Werner Vogels and other dignitaries join us at the Heroes Welcome.

This year we also had a few distinguished women in tech join us at the Hero dinner on Monday night. They were from India, Japan, Brazil, and Korea. It was a sad testament to the state of affairs in our industry to see that less than 15% of the people at the dinner were women. And that was despite us having specially invited some of them. Here is a photo I took with all the women at the dinner. You can see Randall Hunt in the background. He is one of my favorite evangelists.

The dinner was at The Foundation Room. Now I capitalize it as though it should mean something to you. It likely does not. I had no idea what it was. But trust me, everyone working in Las Vegas knows where it is. Its entrance is at the bottom of Mandalay Bay. When you get in, the only thing in the room is an elevator. When you are inside the elevator, it goes only to the 62nd floor, The Foundation Room above it, and back down to this entrance. And if you did not know that this room is at the top of the Mandalay Bay (which I didn’t), the view is spectacular. Well, now that I know it is on the top, the view is just as great!

Tuesday night is traditionally the night that the Intuit folks get together. This is one time of the year when I get to meet people from other Intuit sites in person. These are people with whom I have interacted via chat from Mountain View. This year, we met at this place called Minus Five. It was a room lined with ice on the walls, and I imagine maintained at a temperature of -5 degrees Celsius. It was like a freezer. When we were outside the ice room we were Intuit people. When we went inside we were like Inuit people - if you know what I mean. Especially with our jackets. We had heavy jackets. But it was still uncomfortable. Later on I came in without a jacket and was fine. I only stayed there for 5 minutes each time. Fortunately, they had another room just outside where those of us who did not want to be cold could stand around and have hors d'oeuvres. Here is a picture of the inside of Minus Five.


It was very noisy at Minus Five. Gary Danko in the photo above and I wanted to have dinner at a nice quiet place. We walked around but found no quiet place, so we just made a reservation at Delmonico’s for the next afternoon, went back to our rooms, and ordered room service.

The infrastructure keynote usually happens on Tuesday night. This year they had it on Monday night. I missed it live because I was at the heroes dinner. Wednesday morning is when Andy Jassy’s keynote happens. This is the keynote where they announce most of the new services every year. There is usually a long line that starts forming around 7 AM because the keynote starts at 8 AM sharp. Suffice it to say I got to sit very near the front and center of the stage.



The view from the front is always great. I won’t bore you with the details of what was announced. You can see that, and all the other new service and feature announcements, here: https://aws.amazon.com/new/. The hardware release this year was the DeepRacer: https://aws.amazon.com/deepracer/. It uses deep learning and reinforcement learning. My neighbour and I commented on the coincidence in the name. We said, “IBM’s computer that beat humans at chess was called Deep Blue. Google’s computer that beat humans at go was called Deep Mind.” Now Amazon has released the DeepRacer. I know it does not seem connected, but when Andy was making the announcement, it seemed very connected. Especially before he mentioned the name. You should watch the keynote for five minutes or so before the announcement of the DeepRacer.

That reminds me, there was something special in the keynote. You cannot see it on YouTube because that video does not show all seven movie-sized screens. Apparently in the very first slide-set after the title slide, there was a picture of me. I say slide-set because the “slide” after the title slide was actually 7 different slides on the 7 different screens, many of them showing pictures of AWS users. One of the heroes, Margaret, told me that she saw me on one of the screens. Easter egg.

Wednesday evening was the annual user group leaders get together. We shared tips and said hi to each other. Some of us gave short talks. I did one too.

Thursday morning as usual was allocated to Werner Vogels. Got to sit in front on Thursday as well. I like listening to Werner talk. He talks about technical challenges and solutions.

On Thursday, I also visited the executive summit. Raji Arasu, one of Intuit’s executives, was giving a presentation to the executives. She also had a fireside chat with the VP of ML at AWS - Swami Sivasubramanian.



I sat in the back because I came late and did not want to disturb the audience. Did you know that for every person who presents at re:Invent, 10 others don’t get an opportunity to present? Have you wanted to present at re:Invent but could not? You can increase your chances of getting accepted by presenting elsewhere first.

You can present at the AWS Community Day. There are several of them each year all over the world. Here is a list https://aws.amazon.com/events/community-day/. If you would like to present at the community day in the Bay Area we have opened up the call for papers. You can submit your proposal now.

I got involved with AWS Community Day two years ago. We organized the first ever ACD outside of Japan in San Francisco. Since then it has taken off globally, giving local and regional audiences everywhere an opportunity to come together once a year and talk about their AWS experiences. ACD gives speakers, new and experienced, a stage from which to share their stories of successes and failures - to teach and to learn. You should take advantage of it. Sign up for the next one - either as a presenter or as an attendee (registration page coming soon).

About the Author:

John Varghese is a Cloud Steward at Intuit responsible for the AWS infrastructure of Intuit’s Futures Group. He runs the AWS Bay Area meetup in the San Francisco Peninsula Area for both beginners and intermediate AWS users. He has also organized multiple AWS Community Day events in the Bay Area. He runs a Slack channel just for AWS users. You can contact him there directly via Slack. He has a deep understanding of AWS solutions from both strategic and tactical perspectives. An avid AWS user since 2012, he evangelizes AWS and DevOps every chance he gets.

Photo
devopsguru

If you came here from the AWS Heroes blog post, you are one of the first to see this new logo we created for https://www.awsadvent.com/. AWS Advent is an annual series of 24 blog posts published between Dec 1 and Dec 24.


The posts are written by the community and are for the community. We, the community, want to learn from your experiences. What do you have to contribute? Where is your article?


I’m sure you have some wonderful experiences that you can share. It is easy to contribute, and we have a team of editors who will help make your article ship-shape. Just do this. Propose your article here. Just fill out that form with your proposal. Then Jennifer and I will help you get started in the right direction. When your article is ready, we will help tweak it to perfection.

photo
Text
devopsguru

Why you should contribute to AWS Advent

Keeping up with the news

Hardly a day goes by when AWS has not introduced something new to the world. Every year thousands of new features are announced. Keeping up with the changes and best practices is a full time job. Relying on publications like Last week in AWS keeps me sane. I can get an entire week’s summary of what I need to read (although I don’t read all the links.)

An Advent Calendar

But every December I would like to relax and read 25 curated and focused articles before I go on vacation - one article a day. And sure enough, many technology platforms have started a yearly tradition for the month of December, revealing an article per day, written and edited by volunteers in the style of an advent calendar. (An advent calendar is a special calendar used to count the days in anticipation of Christmas starting on December 1.)

AWS Advent

AWS Advent is one such calendar that explores services in the Amazon Web Services platform. I have enjoyed the articles from past calendars - written by actual users from the community. Here are some of the truly inspiring articles of yore, on diverse topics ranging across the breadth of AWS services.

  1. When the Angry CFO Comes Calling: AWS Cost Control
  2. Securing Machine access in AWS
  3. Getting Started with CodeDeploy
  4. Paginating AWS API Results using the Boto3 Python SDK
  5. Alexa is checking your list
  6. AWS network security monitoring with FlowLogs
  7. Taming and controlling storage volumes
  8. Session management for Web-Applications on AWS Cloud
  9. Limiting your Attack Surface in the AWS cloud
  10. Protecting AWS Credentials

Write your article for the advent calendar

There are plenty more where they came from. These articles seem timeless. But times change. We want to keep up with the changes. And we want to learn from your experiences. What do you have to contribute? Where is your article?

I’m sure you have some wonderful experiences that you can share. It is easy to contribute, and we have a team of editors who will help make your article ship-shape. Just do this. Propose your article here. Just fill out that form with your proposal. Then Jennifer and I will help you get started in the right direction. When your article is ready, we will help tweak it to perfection.

2 down. 23 to go.

We need 25 articles at least. We already have two. That means there is a good chance your article will be accepted. We will continue to accept submissions until the calendar is full.

If you have questions just tweet me or Jennifer.

Text
devopsguru

Deleting an RDS cluster

How do you delete an RDS cluster when, every time you click on Actions, the “Delete cluster” menu item is disabled? Like this…


You can do what I did. Delete all the instances one at a time. The cluster will disappear when the instances are all gone.
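The same dance works from the CLI. Here is a sketch of that loop as a function (my own, not from the post); it assumes the AWS CLI and credentials, and note that --skip-final-snapshot discards data, so drop it if you want a final snapshot:

```shell
# Delete every member instance of an RDS/Aurora cluster, then the cluster itself.
delete_rds_cluster() {  # usage: delete_rds_cluster <cluster-identifier>
  local cluster="$1" inst
  for inst in $(aws rds describe-db-clusters \
      --db-cluster-identifier "$cluster" \
      --query 'DBClusters[0].DBClusterMembers[].DBInstanceIdentifier' \
      --output text); do
    aws rds delete-db-instance \
      --db-instance-identifier "$inst" --skip-final-snapshot
  done
  aws rds delete-db-cluster \
    --db-cluster-identifier "$cluster" --skip-final-snapshot
}

# delete_rds_cluster my-old-cluster
```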

Text
devopsguru

Second annual AWS Community Day coming to the Bay Area on September 12


Last year 700 of us came together to share our recommendations and common practices when working in AWS. We’re bringing the group back together, this time at the iconic Computer History Museum in the heart of Silicon Valley on September 12, 2018 for a full day of talks, networking, and workshops. While the venue is smaller (meaning we can only comfortably accommodate 400 to 500 people), we will have more targeted space to network and collaborate.

Our format is changing from last year based on lessons learned. There will be a single presentation track made up of 20-minute talks in the Hahn auditorium, and an unconference track in the Grand Hall after lunch, allowing folks in the community to come together to talk about the topics impacting them now - whether it’s the Internet of Things, Containers, Serverless, Machine Learning, Big Data, or Cloud Architecture. We also have a workshop track (limited to 30 people per session) where we will have a selection of deep dives on how to use specific tools and techniques.

There will be an experts lounge to connect with AWS experts on a variety of topics. If you have specific questions about implementing some services, or want to deep dive into a discussion with others familiar with the topic, you can do so here. If you are yourself an expert, this is also where you can come and help others navigate the trenches. The experts lounge is also a great place to connect with our regional AWS Community Heroes.

We have a number of great speakers lined up for our presentation track including AWS Technology Evangelist Arun Gupta, and cyber security leader and AWS Community Hero Teri Radichel. More of our speakers to be announced soon!

AWS Technology Evangelist Arun Gupta will kick off our event with an inspirational talk about innovations at AWS that have paved the path for Amazon on its journey to becoming an industry leader.

Cyber security leader and AWS Community Hero Teri Radichel will show us how to defend ourselves against cyber criminals using real-world examples and tools, and how to leverage automation to make it repeatable and secure.

AWS Community Hero Peter Sankauskas will walk through how they built a Lambda and CloudFront based image processor/thumbnail generator. It will be open sourced at this event as well.

Join us for registration and breakfast starting at 8am. We’ll also have a number of snacks throughout the day, including a nacho bar in the afternoon! Lunch will also be included thanks to our sponsors. We’ll close out the day and share what we learned from 5-6pm.

Two wonderful additions this year thanks to our sponsors will be video recording (presentations will be available on YouTube) and live closed captioning posted to the right of the stage.

Everyone is invited to join us at this year’s AWS Community Day, but you must be registered due to our limited capacity. Please do share this link with your friends and coworkers. Register right now and we’ll save a seat for you!

Mark your calendar! September 12, 2018. 8AM to 6PM.

Twitter: After you register follow us and retweet @awscommunityday.





About the authors:

John Varghese

John Varghese is a Cloud Steward at Intuit encouraging the use of new technologies and sound DevOps techniques. He supports the Advanced Technology group in their adoption of AWS. Separately, John runs the Bay Area AWS Meetup group focusing on the Peninsula and the South Bay. If you’re interested in connecting with John about either his work at Intuit or joining the AWS Bay Area Community, you can slack him directly by joining the AWS-Users-Slack workspace or tweet him at @jvusa.

Jennifer Davis

Jennifer Davis is the co-author of Effective DevOps. In her day job, she is a senior site reliability engineer at RealSelf. Jennifer speaks about DevOps, tech culture, and monitoring and gives tutorials on a variety of technical topics. She co-organizes the AWS Advent event and is looking for folks who are interested in sharing their expertise with different topics in AWS. Follow her on twitter @sigje.

Text
devopsguru

SSL for Wordpress on LightSail

SSL is set up differently for the different Bitnami apps. The only instruction set that worked easily for the WordPress installation was this one:

https://medium.com/unicorn-supplies/ssl-for-aws-lightsail-wordpress-8053359a774f

Very thorough and correct.

Photo
devopsguru

What is DevOps? It is a culture that emphasizes increased collaboration between the roles of development and operations. This is fostered by an attitude of shared responsibility and facilitated by automation.

photo
Photo
devopsguru

VPC Peering in AWS lets your resources communicate with resources in a different VPC as though they were in the same network

photo
Photo
devopsguru

A well architected framework in AWS

photo
Text
devopsguru

Use AWS Organizations

Use them to neatly organize your AWS accounts. One account will become the master account, where you get consolidated billing. You can create policies, set up SSO, and get a host of other benefits quickly across all your accounts.

Chat
devopsguru

Error creating RDS cluster with CloudFormation

Romeo:
Hi Juliet. What have you been up to?
Juliet:
Hi Romeo, I was trying to create an Aurora cluster in RDS using CloudFormation. I ran into an error. But I found the solution. Let's see if you know what the error would have meant.
Romeo:
Sure, go ahead.
Juliet:
Just to make sure I could get it working correctly first, I did everything manually and then created the CF template based on my experience. So I created a cluster giving it a name, then I created two instances in the cluster - I think I gave them similar names too. Of course, I gave matching names for the parameter groups and subnet groups etc.
But when I tried to create the same environment using CF, I kept running into this error. "The requested DB Instance will be a member of a DB Cluster. Set database name for the DB Cluster." This error was thrown by the DBInstance resource. Do you know why?
Romeo:
Let me guess: did you try to name the databases with the same convention as the cluster - hoping to have consistent names across everything?
Juliet:
You are on the right track, yes.
Romeo:
While the AWS console lets you name the DBs in a cluster, the CF mechanism does not allow you to do it. You can use the DBClusterIdentifier in the DBCluster resource. That's it. The database instance is tied to the cluster. The name is randomly created and managed by the cluster and CloudFormation.
Juliet:
Go on.
Romeo:
Well just remove the DataBaseName or DBName or whatever property you are setting for the DBInstance and you should be fine.
Juliet:
That's right! Good job!
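In template form, the fix Romeo describes looks roughly like this. This is a minimal, hypothetical fragment (resource names, identifiers, and the DatabaseName are my own); the DatabaseName lives on the DBCluster, and the DBInstance carries no DBName at all:

```json
{
  "Resources": {
    "DemoCluster": {
      "Type": "AWS::RDS::DBCluster",
      "Properties": {
        "Engine": "aurora",
        "DBClusterIdentifier": "demo-cluster",
        "DatabaseName": "demodb",
        "MasterUsername": "admin",
        "MasterUserPassword": "change-me"
      }
    },
    "DemoInstance": {
      "Type": "AWS::RDS::DBInstance",
      "Properties": {
        "Engine": "aurora",
        "DBClusterIdentifier": { "Ref": "DemoCluster" },
        "DBInstanceClass": "db.r4.large"
      }
    }
  }
}
```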
Text
devopsguru

S3 permissions across AWS Accounts

When all your S3 buckets are in the same account, you usually don’t have issues with access. But when you have several accounts (like a lot of companies have these days), you will run into issues. This is mostly because of the way S3 has evolved.

Let me break that down. S3 is a gigantic repository of name-value pairs. The blobs of data that make up the ‘value’ part of the name-value pair can be very, very tiny or very, very large. Each of these blobs is called an object. These objects are placed in buckets. By default each AWS account is allowed 100 buckets.


Each bucket has a globally unique bucket name. If you create a bucket, no one else can create a bucket with the same name. Don’t worry though, if you delete the bucket, then they can create one with that name. Also note that the buckets exist in regions although the console says it is a global resource.

The bucket is owned by the account in which the bucket resides. By ‘account’ I mean the ‘root’ user of the account. Each object in the bucket is owned by the account that put the object in the bucket. If you have only one AWS account, this is straightforward. Nothing to worry about. But if you have multiple accounts and put objects into the bucket from another account using the CLI, this can soon turn messy. Especially if you have thousands of files.

You can find explanations of the permission model in AWS documents here and here. Essentially this flowchart is as complex as it gets.

The ultimate authority on permission to the object is the object itself. If you decide to give permissions to thousands of objects in a bucket, you have to do it on each individual object. This will take forever. So the best practice is to always create a role for each application that will need access to objects in a bucket, and give that role permissions on the object when it is placed there. Thereafter, have the user assume that role to access the objects.
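Another common mitigation when a different account uploads into your bucket (my own addition, not from the post): have the uploader grant the bucket owner full control on each object at write time, so the bucket's account can manage the objects later. A sketch with hypothetical file and bucket names:

```shell
# Upload an object so the bucket-owning account gets full control of it.
put_cross_account() {  # usage: put_cross_account <local-file> <s3-uri>
  aws s3 cp "$1" "$2" --acl bucket-owner-full-control
}

# put_cross_account report.csv s3://shared-bucket/reports/report.csv
```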

Bada bing. Easy as that.

Text
devopsguru

Always enforce SSE for S3 uploads

The least we can do is transparent disk encryption. The easiest way to do this is to set the default encryption to AES-256 when creating the bucket (or later in the properties.)

Once the bucket is created, go to Permissions/Bucket-Policy and create a bucket policy similar to this…

{  
   "Version": "2012-10-17",  
   "Id": "PutObjPolicy",  
   "Statement": [  
       {  
           "Sid": "DenyIncorrectEncryptionHeader",  
           "Effect": "Deny",  
           "Principal": "*",  
           "Action": "s3:PutObject",  
           "Resource": "arn:aws:s3:::BUCKET-NAME-HERE/*",  
           "Condition": {  
               "StringNotEquals": {  
                   "s3:x-amz-server-side-encryption": "AES256"  
               }  
           }  
       },  
       {  
           "Sid": "DenyUnEncryptedObjectUploads",  
           "Effect": "Deny",  
           "Principal": "*",  
           "Action": "s3:PutObject",  
           "Resource": "arn:aws:s3:::BUCKET-NAME-HERE/*",  
           "Condition": {  
               "Null": {  
                   "s3:x-amz-server-side-encryption": "true"  
               }  
           }  
       }  
   ]  
}  

Just replace the bucket name.

Important:

After that, every time you upload files make sure you use the --sse flag to tell S3 to encrypt the file. Otherwise the upload will fail!
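With the deny policy above in place, an upload looks like this (hypothetical file and bucket names). Passing --sse AES256 makes the CLI send the x-amz-server-side-encryption header the policy checks for:

```shell
# Upload with SSE-S3 so the bucket policy above accepts the request.
put_encrypted() {  # usage: put_encrypted <local-file> <s3-uri>
  aws s3 cp "$1" "$2" --sse AES256
}

# put_encrypted data.csv s3://BUCKET-NAME-HERE/data.csv
```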

Text
devopsguru

Creating a simple database in RDS

As long as the database is going to be simple, creating it is a simple process. All the same, we will take some precautions to allow for future proofing. If you look at the navigation pane in RDS, you will see Instances and Clusters right at the top under Dashboard. That is because those two are the ones we will use most frequently when managing RDS. However, the sections below them contain some important elements we must configure to have a good overall experience in the long run.

Basically, if you use the default parameter groups, you will not be able to modify them. You want the ability to change things when you feel like it. So first let’s create a couple of parameter groups. You need one for the cluster and one for the database.
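If you prefer the CLI over the console, the same two groups can be created like this (the group names are made up, and aurora5.6 is assumed as the family; pick the family that matches your engine version):

```shell
# Cluster-level parameter group (settings that apply to the whole cluster).
aws rds create-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-cluster-params \
    --db-parameter-group-family aurora5.6 \
    --description "Modifiable cluster parameter group"

# Instance-level parameter group (settings that apply per DB instance).
aws rds create-db-parameter-group \
    --db-parameter-group-name my-aurora-db-params \
    --db-parameter-group-family aurora5.6 \
    --description "Modifiable DB parameter group"
```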

image
image

MySQL is the simplest DB that RDS provides. If you are familiar with MySQL, you can use the same client and commands for Aurora, and in this example we will use Aurora. That way we can later migrate infrequently used DBs to Aurora Serverless where it makes sense (once Aurora Serverless comes out of preview and into general use).


The default Aurora parameter group family is the PostgreSQL one, as you can see below. Pick plain aurora, which is what we will use for MySQL.

image

We have to create two parameter groups, like I said above. Let’s first create the one for the cluster.

image

Do the same with “DB Parameter Group.”

image

Now you have two parameter groups.

image

Some DB engines offer additional features that make it easier to manage data and databases, and to provide additional security for your database. Amazon RDS uses option groups to enable and configure these features. An option group can specify features, called options, that are available for a particular Amazon RDS DB instance. Options can have settings that specify how the option works. When you associate a DB instance with an option group, the specified options and option settings are enabled for that DB instance.

Amazon RDS supports options for MySQL, SQL Server, Oracle, and PostgreSQL, but not for Aurora. Which is good: Aurora already has everything set up well. Since we are creating an Aurora DB, we don’t have to create an option group. But if you are creating another kind of database, make sure you create an option group so you are not stuck with the default.

image

The next important thing after option groups and parameter groups is the subnet group. I assume you have Data subnets that you dedicate to RDS and to EC2 instances with DBs on them. This lets us keep separate subnets where traffic from the Internet is not allowed. If we don’t create a DB subnet group (which is basically a grouping of the data subnets), RDS will create one for you consisting of all the subnets in the VPC. That is not what we want. Of course, you can easily modify the group and remove the unnecessary subnets, but you will not be able to change the name of the group. That’s the only disadvantage. But I like to name it, so you should do the same. :)

image

As you can see, I was not very creative with the subnet group name; I just called it the VPC ID plus DB-Subnet. It is better than the default, and you can name it what you like. You should not need multiple DB subnet groups in a simple environment, so name it according to your needs.

image

Don’t click “Add all the subnets”. Instead, pick only the subnets you want.

image

And before we create the database, let’s create a security group that allows access from the EC2 instance to the RDS instance. If we don’t create one and select it, RDS will create one by itself and add your IP as the source.

image

Now it is finally time to create an instance. We will create a multi AZ cluster. Just follow these steps.

image
image
image

The database name cannot contain dashes.

image
image
image
image
image

Since we selected multi AZ, a cluster is automatically created.

image

Since we have a cluster, it is better to use that endpoint.

You need the mysql client to access the DB from the EC2 instance. On RHEL that is easy. Just ssh into the EC2 instance and then

sudo yum install mysql -y

Then you can connect to the db with

mysql -h the-end-point -u master -D the_data_base_name -p

If you want an easier name for the cluster endpoint, just create a Route 53 entry. In the mysql command above, use the database name you chose when creating the instance and the master user name you gave.
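A sketch of that Route 53 entry from the CLI, assuming a hosted zone you control; the record name, zone ID, and cluster endpoint below are placeholders:

```shell
# Point a friendly CNAME at the cluster endpoint via a change batch file.
cat > change-batch.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "mydb.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "my-cluster.cluster-abc123.us-west-2.rds.amazonaws.com" }
        ]
      }
    }
  ]
}
EOF

# Apply it with your real hosted zone ID:
# aws route53 change-resource-record-sets --hosted-zone-id ZONE-ID-HERE \
#     --change-batch file://change-batch.json
```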

Text
devopsguru
devopsguru

Automate the toil

  • Manager: You need to automate the manual work you waste your time on. Restacking the AMIs for example.
  • DevOps engineer: (who has been trying for months) I am so close. I should be done automating by the end of this quarter.

One quarter later… rinse and repeat.

Quote
devopsguru
devopsguru
Big data is anything that crashes Excel.
Borat
Text
devopsguru
devopsguru

Setting up a quick slack inviter using zappa

After deploying the website that lets your Slack members invite themselves to your Slack group, this is what the site will look like.

image

Some background

Zappa creates this web application in your AWS account using Lambda and API Gateway, and it does it quickly. With the free tier you pay next to nothing, and it saves you time.


What you need

  1. The code is here. https://github.com/Miserlou/zappa-slack-inviter. Thank you Rich Jones. You will need to clone it.
  2. An AWS account. You will need to decide which region to put the lambda function and the API Gateway resource in. I chose us-east-1 because that is where most of the cutting edge technologies are introduced first.
  3. As a corollary to the above you will need the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
  4. And finally you will need the Slack token. You can generate your Slack token here. You will need your Slack domain: something like mygroup.slack.com.

Steps to configure and deploy

Go to your development directory and run these commands.

git clone https://github.com/Miserlou/zappa-slack-inviter
cd zappa-slack-inviter
virtualenv env && source env/bin/activate && pip install -r requirements.txt

This should set up the correct Python env with the required packages. Note that at the time of writing, Zappa works only with Python 2.7, not Python 3.

You will find a file called app.py. Edit it and replace these three lines with the values for your Slack group.

image

Now run

zappa init

You will get these prompts. I chose ‘production’ as the environment, but that led to a very long name for the Lambda function, as you will see below. So I changed it to ‘prod’.

image


Note, I picked a sensible name for the S3 bucket. This command created a json file called “zappa_settings.json”.

Next run

zappa deploy

This should have been the last command. But it did not work because I had not set the AWS secret values.

image

Pasting the actual error here for those who will search for the text of the error.

ClientError: An error occurred (InvalidClientTokenId) when calling the CreateRole operation: The security token included in the request is invalid.

The error is from AWS; Zappa uses the AWS credentials from your environment. Set them like this:

export AWS_ACCESS_KEY_ID=AKTHISISAFAKEACCESSKEY
export AWS_SECRET_ACCESS_KEY=ThisIs-aFakeSecretAccessKeyUseYourOwnKey
export AWS_REGION=us-east-1

I ran the command again. Now I got a slightly different error.

ClientError: An error occurred (InvalidParameterValueException) when calling the CreateFunction operation: The role defined for the function cannot be assumed by Lambda.

image

This error also comes from AWS. Fortunately, there is nothing you need to do about it. I googled it and found that the error is transient; it disappears after a while. So I tried again 10 minutes later, and the error did not appear. See here.

image

Yes, that error is gone. But now there is an easier-to-fix error.

ClientError: An error occurred (ValidationException) when calling the PutRule operation: 1 validation error detected: Value 'zappa-slack-inv-production-zappa-keep-warm-handler.keep_warm_callback' at 'name' failed to satisfy constraint: Member must have length less than or equal to 64

image

The name is too long. :)

That was easy to fix. I edited the “zappa_settings.json” file and changed the stage name from ‘production’ to ‘prod’. 
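Assuming the stage name is the only part of the generated rule name that changes, you can check the lengths yourself; ‘production’ puts it over the 64-character limit, while ‘prod’ fits:

```shell
long='zappa-slack-inv-production-zappa-keep-warm-handler.keep_warm_callback'
short='zappa-slack-inv-prod-zappa-keep-warm-handler.keep_warm_callback'
echo "${#long} ${#short}"   # prints: 69 63
```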

This time the deploy worked!

image

Nice!

I don’t remember how it gave me the URL to go to for people to sign up. But if it does not show it for you, just run 

zappa update prod

This will replace the lambda and give you your URL.

Text
devopsguru
devopsguru

DevOps according to Puppet

I recently read an excellent ebook from Puppet: https://puppet.com/resources/ebook/devops-and-you-advice-for-building-your-career

If you don’t have the time to read the whole ebook, here is a summary that highlights the most important points.


What is DevOps?

  • DevOps is a set of principles aimed at building culture and processes to help teams work more efficiently and deliver better software faster. 
  • DevOps isn’t a prescribed workflow or toolset, but a way of thinking and working that can be applied to any organization.
  • As the name suggests, DevOps brings development and operations together — and ideally, testing, QA, product, security and management as well — so together they can produce better results for the business, with fewer headaches and surprises.

Image courtesy puppet.com

DevOps Practices

DevOps can look quite different from one organization to the next, yet certain DevOps practices are fundamental:

  • managing infrastructure as code
  • automating common processes
  • creating cross-functional teams and breaking down silos within teams
  • creating visibility and collaboration across teams
  • increasing visibility into metrics and work in progress
  • utilizing version control
  • deploying smaller changes more often

Who should practice DevOps?

DevOps shouldn’t be limited to a specific team (like the ops team) or role (such as DevOps engineers). Everyone involved with the company’s software should be aware of and practicing DevOps.

Whether your title — or the title of someone you’re hiring — is DevOps engineer, site reliability engineer, infrastructure developer, release engineer or something else, what really matters is that you’re all on board with DevOps.


Why should you embrace DevOps?

  • You don’t need to worry about being bored or not learning new things. Since automation is fundamental to DevOps, the common and routine tasks that bore you will be automated, leaving you more time for solving higher-level problems and innovation.
  • There is a good chance you will be fighting fewer fires and putting them out faster.
  • Using DevOps principles, you will have a lower change failure rate, and you will recover much faster.

What skills do you need to succeed in DevOps?

  1. A desire to learn and improve things.
  2. Communication and collaboration.
  3. Tech chops and tools: ops people need to code.
  4. Caring about the big picture.

How do I get started and get ahead?

  1. Be curious and keep learning
  2. Start by focusing on one thing
  3. Don’t be afraid to try or break things
  4. Learn a programming language
  5. Find people to learn from and ask questions
  6. Attend meetups and conferences
  7. Develop a broad skill set over time.

Text
devopsguru
devopsguru

Do we really need the organization validation client for chef?

It looks like you may not need it after all, which actually saves some trouble, as long as you are bootstrapping from your laptop.

If you use the validator.pem file, you need two extra config parameters in your knife.rb: validation_client_name and validation_key. In addition, you need to keep the validation key around in an S3 bucket or some other place it can be shared from.

If you are going to run bootstrap from your laptop, you can just use your client_key (your pem file) and your node_name (your username). The only additional restriction is that when you bootstrap, you need to add a node name with -N. This is actually good, because you can specify your node name. If you don’t (and you are using the organization validator), it takes the public DNS name from the EC2 instance, which will not make sense if you are managing pets in the early stages of your development.
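A hypothetical bootstrap invocation in that style (the IP, ssh user, node name, and run list are all placeholders):

```shell
# Bootstrap using your own client key from knife.rb; -N pins the node name
# so Chef does not fall back to the EC2 public DNS name.
knife bootstrap 203.0.113.10 -x ec2-user --sudo -N web-pet-01 -r 'recipe[base]'
```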

An added advantage is that the key is not stored in the new node’s /etc/chef/validation.pem, so you don’t have to delete it.

Of course, in later stages of the company’s development, when you start autoscaling and have only cattle, then the node name does not matter. Your auto scaler will need its own validation key. At that point it may make sense to use an org validator.

Text
devopsguru
devopsguru

chef generate app vs chef generate cookbook vs chef generate repo

They all generate similar files, but they are slightly different.

In the old days, we used to have one git repo with directories in it called cookbooks, data bags, etc. The cookbooks folder contained all the cookbooks our company needed, all in one git repository. I don’t believe we needed to generate anything, but if you wanted to generate a new cookbook in your repository, you would run chef generate cookbook.

Then along came Berkshelf, which promoted the concept of one git repository per cookbook. This also made it easier to work with Test Kitchen to automate testing. To accommodate this paradigm, chef generate repo was introduced. If you want to follow this methodology, just use chef generate repo and it will generate the repository with a sample cookbook inside it.

chef generate app supports a hybrid approach: it creates a repo with a structure for multiple cookbooks, like in the old days. You can then create a few application-specific cookbooks in the same repo, and continue to create separate repos with chef generate cookbook for cookbooks shared company-wide.
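In command form, the three generators look like this (the names are placeholders):

```shell
chef generate repo chef-repo              # monorepo: many cookbooks, one git repo
chef generate cookbook my_shared_cookbook # one cookbook per repo, Berkshelf style
chef generate app my_app                  # hybrid: app repo holding app-specific cookbooks
```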

Text
devopsguru
devopsguru

Quick steps to generate SSL cert with venafi

Right-click the folder you want to create the certificate in and choose Add/Certificates/Certificate.

In “Certificate Name” and “Common Name”, enter the FQDN. Click Save.

The status will be Ok. Click “Renew now”. In a couple of minutes, the status will be back to OK after cycling through a few states, and the expiration date will now be filled in.

In the Settings tab click Download/Certificate. Click download.

If you check “Include Private Key”, the private key will also be included. As long as you use base64 encoding, you can simply copy it out of the file.

You can check “Include root chain” to get the chain. The descriptions will help you determine which part is the cert and which is the chain; the chain usually sits between the key and the cert.

Text
devopsguru
devopsguru

Filter using tag’s key value pair in AWS

For Amazon’s only example in the documentation, look at this page http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html under the section “To describe all instances with a Purpose=test tag”.


Here is an example that works. To get the subnets that have a tag called “companyname:vpc:component:type” with the value “bastion” in a particular VPC (for example “vpc-31ba1234”), use this command.

aws --profile autotke-dev ec2 describe-subnets --output json --filters Name=vpc-id,Values=vpc-31ba1234 Name=tag:companyname:vpc:component:type,Values=bastion

(Both filters go in a single --filters option; if you repeat --filters, only the last occurrence takes effect.)


To get only the subnet IDs of these subnets, run

aws --profile autotke-dev ec2 describe-subnets --output json --filters Name=vpc-id,Values=vpc-31ba1234 Name=tag:companyname:vpc:component:type,Values=bastion --query 'Subnets[].SubnetId'
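If you want to see what the --query part does without touching AWS, here is the same extraction run locally against a mocked-up describe-subnets response (the subnet IDs are made up):

```shell
cat > subnets.json <<'EOF'
{
  "Subnets": [
    { "SubnetId": "subnet-0aaa1111", "VpcId": "vpc-31ba1234" },
    { "SubnetId": "subnet-0bbb2222", "VpcId": "vpc-31ba1234" }
  ]
}
EOF

# Equivalent of --query 'Subnets[].SubnetId':
python3 -c 'import json; print([s["SubnetId"] for s in json.load(open("subnets.json"))["Subnets"]])'
```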

Text
devopsguru
devopsguru

Get the date on the SSL certificate using the command line

Usually when you want to check the date on an SSL certificate, you just load the web page in a browser (Chrome), click the green lock, and look at the date. But there are times when you don’t have a GUI for the app and still use SSL: for example, when you are using websockets, or when you just want a quick check from the command line.

Here are two variations on the command line, each with a different level of detail, but both containing the expiration date.

echo | openssl s_client -showcerts -servername www.example.com -connect www.example.com:443 2>/dev/null | openssl x509 -inform pem -noout -text

curl -vvI https://www.example.com
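If you only want the dates, append openssl x509 -noout -dates to the first pipeline. You can try the flags locally against a throwaway self-signed certificate (the subject name below is made up):

```shell
# Create a short-lived self-signed cert purely to demonstrate the flags.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/demo.key -out /tmp/demo.crt \
    -subj "/CN=demo.example.test" 2>/dev/null

# Print only the validity window (notBefore / notAfter):
openssl x509 -in /tmp/demo.crt -noout -dates

# Against a live server the same idea becomes:
# echo | openssl s_client -servername www.example.com -connect www.example.com:443 2>/dev/null \
#   | openssl x509 -noout -dates
```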

Link
devopsguru
devopsguru

Forward ssh key agent into container · Issue #6396 · docker/docker

Any company worth its salt has built private Ruby gems and has them on GitHub. When trying to port their application to Docker, bundle install will fail. The thread posted above is about 2 years old. I have not scrolled down to the bottom yet. But if they have fixed it with ssh agent forwarding, it certainly took a long time. If…