How We Overcome Localhost Mapping Issues While End-To-End Testing with Cypress
Learn How We Run Tests on Our Localhost Blog Sub-Domain Wildcards Using Dnsmasq and Integrate Them into CI/CD with GitHub Actions
Do you know what happens when you need to run Cypress end-to-end tests on localhost subdomain wildcards at *.app.localhost? You will likely run into DNS resolution issues, since the requests for those domains have yet to be attributed to an address, such as the localhost IP address (127.0.0.1).
Internally we redirect all development blogs to *.app.localhost. If we want to run tests on any of the possible blogs we might have, we need suitable localhost mapping.
Our solution was to utilise the DNS forwarder Dnsmasq, which lets us map our wildcard to localhost both for local testing and, with the help of GitHub Actions, as part of automated testing in a CI/CD pipeline.
Let's explore how we do this.
The Problem
I was tasked with setting up end-to-end testing using the testing framework Cypress for what was, at the time, our brand-new repository for the new and improved Hashnode blogs. This would allow us to write tests that increase the confidence we have in the code we write.
I started by installing the necessary packages and wrote some passing tests. Everything looked good.
So what was the problem?
The reviewer of the related pull request was unable to get the tests to run. After running the test script, the terminal output would not move forward beyond the following point.
$ yarn test:e2e
yarn run v1.22.18
$ start-server-and-test 'yarn dev' http://testing-main.app.localhost 'yarn cy:run'
1: starting server using command "yarn dev"
and when url "[ 'http://testing-main.app.localhost' ]" is responding with HTTP status code 200
running tests using command "yarn cy:run"
$ next -p 80
ready - started server on 0.0.0.0:80, url: http://localhost:80
event - compiled successfully in 1167 ms (265 modules)
This script simply starts up the dev server, then Cypress verifies the specified base url is responding before the tests are run. The next step in the output here would be Cypress starting up the tests, so we were never making it to that point.
For a little more background information, I set the url http://testing-main.app.localhost as the base url in our Cypress config file. This was a local test blog used in end-to-end testing at the time. Locally our test blogs are redirected to *.app.localhost.
Adding this url to the Cypress config baseUrl means that you will see a warning inside the Cypress test runner if it's unable to verify the server is running.
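For illustration, here is a minimal sketch of that config entry. It assumes the v9-era cypress.json format, and the real config file in our repo contains more than this one option:

```shell
# Hypothetical minimal cypress.json (Cypress v9-era config format);
# the actual repo config has more options than just baseUrl.
cat > cypress.json <<'EOF'
{
  "baseUrl": "http://testing-main.app.localhost"
}
EOF
cat cypress.json
```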
There was no issue with the server, and the url could be visited successfully in the browser. The problem was that, by default, requests to http://testing-main.app.localhost weren't associated with localhost under testing, and without that we can't run tests on this local blog.
We needed a solution for wildcard domain resolution under testing when using our local blog development domains.
What exactly is localhost?
We wanted to use the localhost address for our wildcard, but what is localhost?
Localhost is the standard domain name given to the loopback address. The loopback address is another name for your local computer's address, and it's just a special reserved IP address: 127.0.0.1.
Whenever you send some traffic to this loopback address it will be handled by your own system. This is how we develop websites and apps without needing to utilise the internet.
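You can check this mapping for yourself from a terminal. Here we use a Python one-liner simply as a convenient way to query the system resolver:

```shell
# Resolve "localhost" through the system resolver; on a typical setup
# this prints the loopback address, 127.0.0.1.
python3 -c "import socket; print(socket.gethostbyname('localhost'))"
```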
So how does this relate to our issue?
By default, my machine doesn't know to resolve http://testing-main.app.localhost to the localhost address under testing circumstances.
Custom localhost sub-domains like this would have to be mapped to an address, like our localhost address 127.0.0.1, before we could properly start testing on them.
Solutions
The first solution I came across was simple to set up and allowed tests to run on the url http://testing-main.app.localhost, but it didn't provide the scalability we were looking for long term. The second option was preferred and is how we currently handle this problem.
Requirements
A good solution would support moving away from reliance on maintaining local test accounts and instead allow us to have proper test setup and teardown. An example of proper test setup for us would be creating a brand new publication at the start of a test, then deleting it once our tests are done. Doing this helps ensure less flaky tests that are not at risk of breaking from small changes to test accounts.
Let's take a look at both solutions.
Unscalable Solution - Update /etc/hosts
The /etc/hosts file on your machine associates IP addresses with hostnames, bypassing DNS resolution. It comprises pairs of addresses and hostnames, and is under the control of the system user. You can view this file yourself by opening the path in a text editor. If you add an entry here, your machine will know how to resolve the specified hostname before DNS is referenced.
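You can print the file from a terminal to see the default pairs:

```shell
# Each uncommented line pairs an address with one or more hostnames.
cat /etc/hosts
```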
Don't remove the default content of this file unless you intend to break your localhost setup.
We can add an entry here that would allow us to attribute our current test domain to the localhost address. But this solution has one major drawback.
We don't want to keep editing this file to add a new entry whenever proper test setup creates a new hostname we need to test on. The /etc/hosts file doesn't support wildcard entries. This means we would have to find a way to add each individual new publication hostname created during a test to the /etc/hosts file, and that entry would probably never be required again once the publication is torn down.
This just isn't a practical solution in our case, but let's see how this would work.
Open the /etc/hosts file.
sudo nano /etc/hosts
Then add the following entry to the list to attribute the test blog to the localhost address and save it.
127.0.0.1 testing-main.app.localhost
The full file now looks something like this.
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
127.0.0.1 testing-main.app.localhost
255.255.255.255 broadcasthost
::1 localhost
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
This temporary solution was enough to ensure the tests would run on the machine of the dev running them, as long as they were confined to testing-main.app.localhost of course.
Preferred solution - Dnsmasq
It would be much better if we could have one top-level entry that handles all possible test blogs we might want to access. Instead of restricting us to testing on testing-main.app.localhost, we can set it up to run tests on any .app.localhost domain instead.
This was possible using Dnsmasq.
Dnsmasq is a DNS forwarder that would allow us to match our wildcard entry to the localhost address. With this solution, we don't need to edit our /etc/hosts file at all. We will tell our machine to use the Dnsmasq server for its DNS queries relating to our wildcard.
For transparency, my system is an M1 Mac running macOS Big Sur Version 11.5.1. Setup may vary across different operating systems.
1) Install Dnsmasq
brew install dnsmasq
2) Create a resolver directory
This is where we will add our resolver for the wildcard we need to support. With this resolver in place, queries to the wildcard will be directed to the localhost address we specify in the resolver file.
sudo mkdir /etc/resolver
3) Create the resolver file that will handle our wildcard
You don't have to add the leading . in front of app; it will just be ignored.
sudo touch /etc/resolver/app.localhost
4) Pipe the localhost IP address 127.0.0.1 into the resolver file
echo nameserver 127.0.0.1 | sudo tee -a /etc/resolver/app.localhost
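If you'd like to see exactly what steps 3 and 4 produce before touching system paths, you can rehearse them in a temporary directory (the real file lives at /etc/resolver/app.localhost):

```shell
# Rehearsal in a scratch directory: create the resolver file and append
# the nameserver line, exactly as the sudo commands above do.
tmp=$(mktemp -d)
touch "$tmp/app.localhost"
echo "nameserver 127.0.0.1" | tee -a "$tmp/app.localhost" > /dev/null
cat "$tmp/app.localhost"
```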
5) Update Dnsmasq config file
Then we need to update the config file for Dnsmasq. The file contains many commented-out options, each with a description, that let you customise your setup. We will just add the following address, which we want to force to localhost. Note that > overwrites the existing file contents; use >> if you'd prefer to append.
echo 'address=/app.localhost/127.0.0.1' > $(brew --prefix)/etc/dnsmasq.conf
6) Restart Dnsmasq
Restart the dnsmasq server whenever you change the config file. We have to run Dnsmasq as root because it listens on a privileged port (port 53) by default.
sudo brew services restart dnsmasq
That's the setup done. You can verify the setup by checking the DNS configuration.
scutil --dns
The new app.localhost resolver is listed.
resolver #9
domain : app.localhost
nameserver[0] : 127.0.0.1
flags : Request A records, Request AAAA records
reach : 0x00030002 (Reachable,Local Address,Directly Reachable Address)
Performing a DNS lookup should now return an answer.
$ dig one.app.localhost @127.0.0.1
; <<>> DiG 9.10.6 <<>> one.app.localhost @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58597
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;one.app.localhost. IN A
;; ANSWER SECTION:
one.app.localhost. 0 IN A 127.0.0.1
Finally, try pinging one of these wildcards to confirm a response.
$ ping one.app.localhost
PING one.app.localhost (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.120 ms
^C
That's it! Now our tests were free to run on any .app.localhost blog, since the wildcard resolves to localhost.
Integrate tests relying on Dnsmasq into CI/CD pipeline
We've just worked out the local DNS setup using Dnsmasq, but we can take this one step further. Integrating tests into a continuous integration and delivery (CI/CD) pipeline for test automation is an important part of maintaining code reliability and quality.
We can achieve this using GitHub Actions.
What is GitHub Actions?
GitHub Actions is a CI/CD platform that can help in automating our workflows. We can use this platform to run automated checks on code, like building, linting, or testing, when we perform certain Git-related actions.
There are workflow usage limits associated with GitHub Actions so do check out their documentation for more information on that.
Creating the workflow
First, create a workflow file for the repository at .github/workflows/main.yml. Here we need to add all the steps required to set up Dnsmasq on the runner that our GitHub Actions job runs in. The runner is just a fresh virtual machine that runs the workflow. In the following example, Ubuntu Linux is the virtual machine of choice.
We'll give the workflow the name Cypress Tests and include the job steps required to fulfil it. Ensuring existing tests pass before new code reaches production is essential, so we want to run these tests automatically whenever we open pull requests against certain branches. The branches in this case are the preview and production branches, development and main.
The full .github/workflows/main.yml file for handling this workflow looks like this:
name: Cypress Tests

on:
  pull_request:
    branches:
      - development
      - main

jobs:
  cypress-run:
    runs-on: ubuntu-latest
    steps:
      - name: Disable systemd-resolved
        run: |
          sudo systemctl disable systemd-resolved
          sudo systemctl stop systemd-resolved
          sudo systemctl mask systemd-resolved
          sudo unlink /etc/resolv.conf
          echo nameserver 8.8.8.8 | sudo tee /etc/resolv.conf
      - name: Install dnsmasq
        run: |
          sudo apt-get update
          sudo apt-get install -y dnsmasq
      - name: Configure dnsmasq
        run: |
          sudo mkdir /etc/resolver
          sudo touch /etc/resolver/app.localhost
          echo nameserver 127.0.0.1 | sudo tee -a /etc/resolver/app.localhost
          sudo touch /etc/dnsmasq.conf
          echo port=53 | sudo tee -a /etc/dnsmasq.conf
          echo listen-address=127.0.0.1 | sudo tee -a /etc/dnsmasq.conf
          echo address=/app.localhost/127.0.0.1 | sudo tee -a /etc/dnsmasq.conf
          sudo sed -i '1s/^/nameserver 127.0.0.1\n/' /etc/resolv.conf
          sudo systemctl restart dnsmasq
      - name: Checkout
        uses: actions/checkout@v2
      - name: Cypress run
        uses: cypress-io/github-action@v2
        with:
          start: yarn test:e2eci
It's possible the above workflow can be simplified a little and a couple of steps could be removed, but this is what is currently working for us. Let's break it down based on the job steps.
1) Disable systemd-resolved
On newer versions of Ubuntu, systemd-resolved is used as a caching DNS resolver. By default it runs on port 53, which is the same port that Dnsmasq runs on. We need to disable this service to avoid the port clash. We also set the default DNS server to the public Google DNS server at the IP address 8.8.8.8.
2) Install Dnsmasq
Now we can download package information from all configured sources with the apt-get package manager and install Dnsmasq.
3) Configure Dnsmasq
Here we run the steps required to set up Dnsmasq. It will be running inside the ubuntu-latest virtual environment. We add the localhost nameserver 127.0.0.1 to the top of the /etc/resolv.conf file, which is responsible for defining your system's DNS resolvers. It differs from the /etc/hosts file in that /etc/resolv.conf lists nameservers in order of preference, while the other overrides those nameservers through forced mappings to IP addresses. Again, it's possible this setup could be simplified, and we are still refining the process.
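The sed line is the key bit: it prepends the local nameserver so it is consulted first. You can see its effect by rehearsing the same edit on a scratch copy of a resolv.conf-style file:

```shell
# Rehearse the workflow's sed edit (GNU sed, as on the Ubuntu runner)
# on a scratch file rather than the real /etc/resolv.conf.
scratch=$(mktemp)
printf 'nameserver 8.8.8.8\n' > "$scratch"
sed -i '1s/^/nameserver 127.0.0.1\n/' "$scratch"
cat "$scratch"
```

After the edit, the local nameserver sits on the first line and the Google DNS fallback on the second.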
4) Checkout
Check out the repository so our workflow can access it.
5) Cypress run
The final step is to run the test script. You can verify everything is working as expected from the Actions tab of the repo in GitHub. If you view the logs from the job run, you should see the output of Cypress starting up testing during that step.
Now when I attempt a pull request to the specified branches, GitHub Actions will do its thing and run our tests automatically.
If you want to test out your workflow, you can always set it to trigger on a remote push instead. Ideally we would like to bring the time for the run down, but that's work for another day.
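For example, a hedged sketch of that trigger change (the branch name here is a placeholder, not one of our real branches):

```yaml
# Hypothetical: run the same workflow on every push to a work-in-progress
# branch while you iterate on the workflow itself.
on:
  push:
    branches:
      - my-feature-branch
```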
Wrapping up
Those were some of the problems we came across while testing our blogs and how we managed to resolve them! If you have any questions or concerns, feel free to get in touch with me on Twitter.
Also a quick shoutout to Sandro Volpicella for his contribution to this work.
If you'd like to help me solve interesting problems like this, take a look at our Careers Page, and you could be writing our next engineering article.
Until next time!