Setting Up a Project with CI/CD Using Amplify
In this tutorial, you are going to learn how to set up a project with CI/CD. We are going to use the Amplify console to automate our deployment.
Note: This article is a tutorial for developers who are familiar with the basics of Amplify. Do you want to learn how to accelerate the creation of your projects using Amplify 🚀? If you are a beginner, I recommend checking out Nader Dabit's free course on egghead, or Amplify's 'Getting Started' guide.
To decrease your deployment risk, you need to increase your deployment frequency. Deploying more often with small incremental changes means:
- If there is a bug, there is less code surface that could've caused the bug.
- Your users get to enjoy a constant stream of new features.
- You and your team will feel confident deploying your software.
You can deploy frequently by using CI/CD, which stands for continuous integration and continuous deployment. It is the practice of automating your tests and your static analysis to ensure your code works and is properly formatted. Therefore, CI/CD depends on good test coverage.
In this article, you will see an example using static analysis with ESLint and Prettier, unit tests with RITEway, and functional tests with TestCafe. You will learn how to set up your project with these tools and how to automate them. Lastly, we will host the app using the Amplify console, which will run our tests before each deployment for us.
Static Analysis
If you want to code along, start by initializing a React app with Create React App. I will explicitly point out when you need a different configuration, e.g. if you are using React Native, Vue or Angular.
ESLint
ESLint is a JavaScript linting tool, which can find and fix problematic patterns or style guide violations in your code.
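A sketch of the install step; which packages were added here is an assumption (the Prettier integration below needs these three, and CRA already bundles `eslint` itself via `react-scripts`):

```bash
yarn add --dev prettier eslint-plugin-prettier eslint-config-prettier
```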
Or use `npm install --save-dev`. In apps created without CRA, you will need to install `eslint`, too.
If you use CRA, you can add the following code to the `"eslintConfig"` key in `package.json`. Otherwise, add it to your `.eslintrc.json`.
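A minimal sketch of the config, assuming CRA's shared config is all we extend at this point:

```json
{
  "extends": ["react-app"]
}
```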
React projects which are set up with CRA come with an ESLint configuration. If you were to use this setup outside of CRA, you would extend `"eslint:recommended"` instead of `"react-app"`. Furthermore, you might want to set `parserOptions`' `ecmaVersion` to `2019`, configure the `env` key, and so on. Make sure you cover the configuration that the React app plugin usually supplies.
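A hedged sketch of what such a standalone (non-CRA) config might look like; the exact `env` entries are assumptions:

```json
{
  "extends": ["eslint:recommended"],
  "parserOptions": {
    "ecmaVersion": 2019,
    "sourceType": "module"
  },
  "env": {
    "browser": true,
    "es6": true,
    "node": true
  }
}
```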
Add a `"lint"` script to your `package.json`.
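A sketch of the script, assuming we lint everything from the project root:

```json
{
  "scripts": {
    "lint": "eslint --ignore-path .gitignore ."
  }
}
```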
Using the `--ignore-path` flag, we can reuse our `.gitignore` to make sure we only lint files that we wrote.
Prettier
Prettier is a hugely popular tool used to format your code.
`eslint-plugin-prettier` runs `prettier` for you when you lint. `eslint-config-prettier` will disable all the ESLint rules that are irrelevant because of Prettier. That means if you run the lint script (see below), or you have your editor configured to integrate ESLint, you won't see any errors for conflicting rules.
Extend your ESLint config.
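A sketch of the extended config; using `plugin:prettier/recommended`, which wires up both packages at once, is an assumption about the original setup:

```json
{
  "extends": ["react-app", "plugin:prettier/recommended"]
}
```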
Prettier's settings are configured using a file called `.prettierrc`.
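The specific options are assumptions; a common minimal `.prettierrc`:

```json
{
  "singleQuote": true,
  "trailingComma": "es5"
}
```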
Add a `"format"` script that fixes your code.
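A sketch, assuming the `"lint"` script from above:

```json
{
  "scripts": {
    "format": "yarn --silent lint --fix && echo 'Lint complete.'"
  }
}
```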
If ESLint finds zero errors, it prints out nothing. We add `echo 'Lint complete.'` to verify that our lint script ran. `--silent` (or `-s`) suppresses some unnecessary output of the commands and keeps your console clean. Note that for npm you have to pass on flags like `--fix` using an extra `--`.
Unit Tests
We are going to use RITEway for our unit tests because of its genius API. Note that RITEway does not work with React Native, because there is no good open-source mock for the React Native components (e.g. `<View />`, `<Text />`, etc.). If you'd like to use the RITEway API with Jest, which has a React Native mock, try out RITEway-Jest.
RITEway
We install RITEway alongside some Babel plugins. We need to add these dependencies to transpile React and modern JavaScript code. Add a `.babelrc` to configure these plugins.
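A sketch of the setup; the exact package list is an assumption based on RITEway's documented Babel setup, plus the React preset:

```bash
yarn add --dev riteway @babel/core @babel/register @babel/preset-env @babel/preset-react
```

And the `.babelrc`:

```json
{
  "presets": ["@babel/preset-env", "@babel/preset-react"]
}
```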
If you use Gatsby, you will need some additional configuration.
Now, we need some code with tests. Delete `index.css`, `App.css` and `logo.svg`, as well as all references to these files in `App.js` and `index.js`. Afterwards, add a folder called `sum/` within `src/` and create two files (`index.js` and `sum.test.js`) in it.
We will create a simple `sum` function, so we have something to unit test.
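A sketch of the function; the default parameters are an assumption:

```js
// src/sum/index.js
// Adds two numbers, defaulting missing arguments to 0.
const sum = (a = 0, b = 0) => a + b;

export default sum;
```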
Here are the tests for `sum`.
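A sketch using RITEway's `describe` and `assert`; the specific test cases are assumptions:

```js
// src/sum/sum.test.js
import { describe } from 'riteway';

import sum from './index.js';

describe('sum()', async assert => {
  assert({
    given: 'no arguments',
    should: 'return 0',
    actual: sum(),
    expected: 0,
  });

  assert({
    given: 'two numbers',
    should: 'return their sum',
    actual: sum(2, 3),
    expected: 5,
  });
});
```

Each assertion answers the questions "given what?" and "should do what?", which is what makes RITEway's API so readable.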
Change your `<App />` component to use `sum` to count the user's clicks.
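A sketch of what the component might look like; the markup and the `data-testid` attribute are assumptions:

```jsx
// src/App.js
import React, { useState } from 'react';

import sum from './sum';

export default function App() {
  const [clicks, setClicks] = useState(0);

  return (
    <div>
      {/* Hypothetical test id so unit and functional tests can find the count. */}
      <p data-testid="click-count">{clicks}</p>
      <button onClick={() => setClicks(sum(clicks, 1))}>Click me</button>
    </div>
  );
}
```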
And add tests for the `<App />` component.
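A sketch using RITEway's `render-component` helper, which renders a component to a Cheerio function; the file location and selector are assumptions matching the sketch above:

```jsx
// src/App.test.js
import React from 'react';
import { describe } from 'riteway';
import render from 'riteway/render-component';

import App from './App.js';

describe('<App />', async assert => {
  const $ = render(<App />);

  assert({
    given: 'no clicks yet',
    should: 'render a count of 0',
    actual: $('[data-testid="click-count"]').text(),
    expected: '0',
  });
});
```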
Lastly, we need to create a file in `src/` called `index.test.js`. In it, import all other test files that contain unit tests.
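A sketch, assuming the test file locations from above:

```js
// src/index.test.js
import './sum/sum.test.js';
import './App.test.js';
```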
Importing all tests in a test file allows us to be selective about which tests we run, which can be helpful as your project grows.
With these basic tests ready, we can configure `package.json`.
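A sketch of the script, assuming RITEway's CLI together with Babel's require hook:

```json
{
  "scripts": {
    "unit-tests": "riteway -r @babel/register src/index.test.js"
  }
}
```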
If you'd rather use a glob pattern to decide which tests to run, you can point the script at `'src/**/*.test.js'` instead. Currently, we have to run `yarn unit-tests` any time we want to run our tests. We can automate this process using watch.
watch
We install watch to have a script running that reruns our tests every time a file changes. (Note: watch might need additional setup to work on Windows.)
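A sketch of the install, assuming both tools from this section are added together:

```bash
yarn add --dev watch tap-nirvana
```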
We additionally add `tap-nirvana`, which colors our test output, making it easier to read. Only our `"watch"` script will have colored output, because we don't care about that for our CI/CD processes.
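A sketch of the script; watching the `src` directory and chaining tests with formatting is an assumption based on the description below:

```json
{
  "scripts": {
    "watch": "watch 'clear && yarn --silent unit-tests | tap-nirvana && yarn --silent format' src"
  }
}
```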
Using `yarn watch`, all our tests run and our code gets formatted each time we hit save.
Bonus: Debugging
Sometimes you get an error, and you'd like to use the `debugger` statement in your tests. We can streamline this process by adding a `debug` script.
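A sketch, assuming Node's inspector together with Babel's require hook:

```json
{
  "scripts": {
    "debug": "node --inspect-brk -r @babel/register src/index.test.js"
  }
}
```

The `--inspect-brk` flag pauses execution on the first line, so you have time to attach Chrome's debugger before the tests run.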
If you run this script, you can open Chrome and visit `chrome://inspect` to jump to your breakpoint.
Functional Tests
For our functional tests, we will use TestCafe. If you prefer Cypress, that's okay, too. They are both great. I chose TestCafe, since it supports several browsers.
TestCafe supplies two global variables to its tests: `fixture` and `test`. To avoid ESLint yelling at us about these variables being undefined, we add `eslint-plugin-testcafe` and configure it in our `.eslintrc.json`.
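A sketch of the install and config; the plugin's `recommended` preset declares the globals, and combining it with the earlier `extends` entries is an assumption:

```bash
yarn add --dev testcafe eslint-plugin-testcafe
```

```json
{
  "extends": [
    "react-app",
    "plugin:prettier/recommended",
    "plugin:testcafe/recommended"
  ],
  "plugins": ["testcafe"]
}
```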
Next, write a test in `src/functional-tests/index.js`.
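A sketch of a functional test; the page URL, selector, and assertion are assumptions matching the counter app above:

```js
// src/functional-tests/index.js
import { Selector } from 'testcafe';

fixture('App').page('http://localhost:3000');

test('clicking the button increments the count', async t => {
  const count = Selector('[data-testid="click-count"]');

  await t.click('button').expect(count.innerText).eql('1');
});
```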
Lastly, add a `functional-tests` script to your `package.json`.
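A sketch of the script, assuming headless Chrome as described below:

```json
{
  "scripts": {
    "functional-tests": "testcafe chrome:headless src/functional-tests/"
  }
}
```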
We run Chrome in headless mode for two reasons.
- The first reason is to speed up the tests during development by not painting anything. Rendering usually takes the most time when running functional tests. Keep in mind: for small applications, this might be overkill. Another neat trick you can use as your application grows is to run your functional tests in parallel. Make sure your tests are sufficiently isolated from each other so that they can run in parallel and in random order.
- The second reason is that headless mode allows our tests to run in the Amplify console. As far as I know, you can't run a real Chrome instance that paints to a screen in the console.
Tip: You can use TestCafe's meta tags to differentiate between headless tests for development & pre-deployment and smoke tests that actually render something for post-deployment.
If you run your functional tests, make sure to run `yarn start` first.
CI/CD
We can add a `"validate"` script that checks if everything works. You could manually run this script any time you commit new code to Git.
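A sketch of the script; its exact composition is an assumption, but the TestCafe flags match the explanation below:

```json
{
  "scripts": {
    "validate": "yarn lint && yarn unit-tests && testcafe chrome:headless src/functional-tests/ --app 'yarn start' --app-init-delay 4000"
  }
}
```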
Notice the `--app` and the `--app-init-delay` flags. The former runs `yarn start` before our tests and terminates it when they finish. The latter delays the start of the tests by 4 seconds. We need the delay because React apps usually take some time to load. If the tests run too early, they fail because they can't find the DOM elements.
We want to automate this script to run when we commit new files.
Husky
Husky can automatically run `yarn` or `npm` commands by hooking into your Git commits. (You can take a look at your Git hooks under `.git/hooks/`.)
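A sketch of the install:

```bash
yarn add --dev husky
```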
All we need to do now is add a `"husky"` key to `package.json` that calls `"validate"` using the `"pre-commit"` hook.
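A sketch of the config, following the `package.json`-based setup of the Husky versions the text describes:

```json
{
  "husky": {
    "hooks": {
      "pre-commit": "yarn validate"
    }
  }
}
```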
The script causes `"validate"` to run any time you commit to Git.
Amplify Console
When you deploy using the Amplify console, you can get it to run the validate script before deploying. You might ask yourself: "Why would we want to run it again? We just ran it with the `"pre-commit"` hook." We rerun it to avoid the "It works on my end 🤷🏻‍♂️" problem. The tests might have passed on your machine, but they need to pass after they've been deployed, too. For example, what if your server runs on a different Node version? Or maybe you forgot to add some environment variables?
The Amplify console makes deployment easy by connecting to your GitHub repo. In the console click on "Connect app", then choose GitHub and select your repository and branch. Afterwards, jump into your "Build settings" and click "edit".
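A sketch of what the edited build settings (`amplify.yml`) might look like; everything outside the `preBuild` commands is an assumption based on Amplify's standard build spec for CRA apps:

```yaml
version: 1
frontend:
  phases:
    preBuild:
      commands:
        # Download and install Chrome so TestCafe can run headlessly.
        - wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm
        - yum install -y ./google-chrome-stable_current_*.rpm
        - yarn install
        # Run linting, unit tests, and functional tests before building.
        - yarn validate
    build:
      commands:
        - yarn build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
```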
Let's break this down.
- `wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm` downloads Chrome so we can install it using yum.
- `yum install -y ./google-chrome-stable_current_*.rpm` installs Chrome.
- `yarn install` installs our node modules.
- `yarn validate` runs our validate script.
Since we added these scripts to the `preBuild` `commands` section, they will run before we build the app. If any of the tests fail, the deployment is aborted. If a passing build was already deployed, the console automatically rolls back to it.
That's it! 🚀 You can now deploy your projects with confidence.
If you liked this tutorial, you might want to read about "Multiple Environments with AWS Amplify" because effectively collaborating in teams is an essential skill.
Summary
Here is our final `package.json`:
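A sketch consolidating the pieces from this tutorial; the dependency versions are omitted and the exact contents are assumptions based on the sections above:

```json
{
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "lint": "eslint --ignore-path .gitignore .",
    "format": "yarn --silent lint --fix && echo 'Lint complete.'",
    "unit-tests": "riteway -r @babel/register src/index.test.js",
    "watch": "watch 'clear && yarn --silent unit-tests | tap-nirvana && yarn --silent format' src",
    "debug": "node --inspect-brk -r @babel/register src/index.test.js",
    "functional-tests": "testcafe chrome:headless src/functional-tests/",
    "validate": "yarn lint && yarn unit-tests && testcafe chrome:headless src/functional-tests/ --app 'yarn start' --app-init-delay 4000"
  },
  "eslintConfig": {
    "extends": [
      "react-app",
      "plugin:prettier/recommended",
      "plugin:testcafe/recommended"
    ],
    "plugins": ["testcafe"]
  },
  "husky": {
    "hooks": {
      "pre-commit": "yarn validate"
    }
  }
}
```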
We used ESLint with Prettier to format our code. We wrote unit tests with RITEway and created a `"watch"` script to test our code every time we save. Furthermore, we added functional tests with TestCafe. We set up Husky to `"validate"` our code and used the Amplify console to do the same before we deploy.