How to continuously deploy your Laravel application on AWS

There is something very powerful about automating even the little things. We sometimes think of automation as the simple maths of saving a given number of minutes or seconds, a given number of times a day, but the compound effect is much greater.

As you set up more automation for your deployments, you will run into, and solve, problems many times more critical than saving a few minutes a week.

Automating solves the problem of documentation once and for all. Your deployment procedures are turned into infrastructure-as-code and configuration-as-code, which are the best documentation there is. If you use declarative tools like CloudFormation or Terraform, you will effectively commit to your repository HCL, YAML or JSON files, along with Dockerfiles, describing the state of your infrastructure and configuration.

Automating forces you to enforce security. You will find that your previous manual procedures relied on credentials or keys with more permissions than needed, and that you still hadn't created that deployment key like you said you would. You will not grant your automation scripts such permissive access; you will finally set up the appropriate credentials and permissions, and your infrastructure will be more secure as a result.

Automating is the only way to grow your technical team efficiently. The deeper you automate, the more standard your processes become. You will replace your stray scripts with widely understood technologies like Docker, Terraform, Kubernetes or Consul and their design patterns, which means you can build your pipelines and hand them over to a new developer quickly. You can forget about them and come back months later without missing a beat.

This is the procedure I use to deploy my clients' Laravel applications on AWS. I hope it helps you deploy yours. If your use case is more complex, I provide ongoing support packages ranging from mentoring your developers to hands-on building of your application on AWS. Ping me at hi@getlionel.com

1. Set up CodeCommit and push your Laravel code

First, create a repository in CodeCommit from the AWS console. Then authenticate to the repository and push your Laravel project as follows.

Configure authentication to CodeCommit by adding the AWS command line credential helper to your Git config:
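Assuming the AWS CLI is installed and configured with an IAM user allowed to access CodeCommit, these are the two documented commands:

    git config --global credential.helper '!aws codecommit credential-helper $@'
    git config --global credential.UseHttpPath true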

You can now push your code to CodeCommit.
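For example, assuming a repository named laravel-app in us-east-1 (adjust the region and repository name to yours):

    git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/laravel-app
    git push -u origin master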

2. Set up CodeBuild

In our CodeBuild project, we define the commands to be run after CodeBuild has cloned our repository. They are similar to what you would write in bitbucket-pipelines.yml or gitlab-ci.yml: you define your environment (operating system and pre-installed tools), and your commands run from the root directory of your project.
Here we use an Ubuntu 14.04 image with Docker, Python and Docker Compose pre-installed, and we will install node, npm and gulp to compile our front-end assets.

The pre-build steps construct the Docker repository URL we will tag our images with, the build command is a simple docker build, and the post-build commands are one or more docker push. Ignore the last post-build command for now.
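Below is a sketch of what such a buildspec.yml could look like. The repository name (laravel), the $AWS_ACCOUNT_ID variable and the gulp-based asset compilation are assumptions to adapt to your project, and the post-build phase would repeat the docker push for each image of a multi-container application:

    version: 0.2

    phases:
      install:
        commands:
          # node, npm and gulp are needed to compile the front-end assets
          - curl -sL https://deb.nodesource.com/setup_8.x | bash -
          - apt-get install -y nodejs
          - npm install -g gulp
      pre_build:
        commands:
          # log in to ECR, then construct the repository URL and image tag
          - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
          - REPOSITORY_URI=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/laravel
          - IMAGE_TAG=$CODEBUILD_RESOLVED_SOURCE_VERSION
      build:
        commands:
          # compile the assets, then build the Docker image
          - npm install && gulp --production
          - docker build -t $REPOSITORY_URI:$IMAGE_TAG .
      post_build:
        commands:
          - docker push $REPOSITORY_URI:$IMAGE_TAG
          # the last command writes the images definition file used by the
          # deploy step of CodePipeline (see section 3)
          - printf '[{"name":"laravel","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json

    artifacts:
      files:
        - imagedefinitions.json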

3. Set up CodePipeline and trigger ECS zero-downtime redeployment

Here is where we connect all the dots. So far we have a CodeCommit repository, a CodeBuild project and our ECS cluster running happily; a CodePipeline project will connect them all.

Every commit to the master branch of CodeCommit will trigger our CodePipeline project. The CodeBuild commands we defined will build our Docker images and push them to our ECR repositories, and finally the last step of the pipeline will redeploy our application.

For the Deploy step, CodePipeline relies on an images definition file. It is a JSON file, created in the post-build stage of CodeBuild, that describes how the Task Definitions of our ECS cluster should be updated. Provided the cluster has enough capacity, ECS will spin up the new containers and wait for them to be reported healthy before shutting down the old ones, effectively achieving zero-downtime deployment.

This JSON file can easily be built with bash and printf in the post-build phase of CodeBuild.
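For example, assuming our multiple-container application runs two containers named laravel and nginx (the names, account ID, repository URLs and tag below are placeholders):

    printf '[{"name":"laravel","imageUri":"%s"},{"name":"nginx","imageUri":"%s"}]' \
      $REPOSITORY_URI_LARAVEL:$IMAGE_TAG $REPOSITORY_URI_NGINX:$IMAGE_TAG > imagedefinitions.json

which produces a file like:

    [
      {"name": "laravel", "imageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/laravel:b20403a"},
      {"name": "nginx", "imageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/nginx:b20403a"}
    ]

The name field must match the container name in your Task Definition so ECS knows which container's image to update.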

Note that this is why we tagged our newly built Docker images with a new tag: ECS can now pick up the Task Definition update and force a pull of the new images onto our container instances.

ECS uses the MinimumHealthyPercent and MaximumPercent settings of your ECS Service, together with the cluster's spare capacity (memory and CPU), to orchestrate the redeployment. If there is spare capacity for a new instance of your Task Definition and you haven't reached the MaximumPercent allowed, ECS will spin up a new instance of your Task Definition and then delete the older one, in that order. Otherwise, it might swap them in the opposite order, creating downtime.
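These two settings belong to the deployment configuration of the ECS Service. For example, to guarantee the create-before-destroy order, you could keep 100% of the desired count healthy at all times and let ECS temporarily run up to 200% of it during a deployment (the cluster and service names below are placeholders):

    aws ecs update-service --cluster my-cluster --service my-laravel-service \
      --deployment-configuration minimumHealthyPercent=100,maximumPercent=200

This still requires enough spare memory and CPU in the cluster to run the extra tasks while both versions are alive.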

I hope this helps you build a bit more automation into your deployments. I'm also looking forward to the next ECS feature release expected this month: automatic service discovery through DNS registration of your micro-services in Route53. If you're the first to implement it, comment below!
