An easy-to-use web service for compiling DSDL

I have completed the updates for production. The site now serves HTTPS on port 443 with a Let’s Encrypt certificate, and all services run in Docker containers. GitHub Actions builds and deploys changes to production from the cicd branch (for the moment).

It would be good if someone could verify that the application works as expected from the new secure port URL.

Are these changes complete and ready for production? It looks like the configuration settings are for testing. It might have been better to put them in a feature branch for review, as I cannot merge the cicd branch into main until we resolve the conflicts in nginx.conf and prod.docker-compose.yml. Perhaps we should protect main to enforce pull requests. What do you think?

The application does not seem to be functioning correctly but I am not sure if the culprit is in the deployment configuration or in the application itself. These are the inputs I supplied to the application (my_project.zip is the same one I posted here earlier):

The expected output is an archive with four directories inside:

  • uavcan (the standard regulated namespace from GitHub)
  • reg (a non-standard regulated namespace from GitHub)
  • my_project (an unregulated namespace from the archive)
  • nunavut (the support library generated by Nunavut)

The archive I got contains only two namespaces: uavcan and reg. The code generated in these directories is, however, correct.

If I compile just my_project.zip, I get a file not found error:

When I try to compile just my_project.zip, I click “Submit” and nothing happens. There are no logs on the server either. I tried different browsers and settings without success. The problem might be with the my_project.zip file itself. Its SHA256 checksum is: 68a1db86e9ee4591f13d1bcb282868602a8ff5bd67996bf96f8be218b9e5729c.
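To rule out a corrupted upload, anyone comparing their copy against that checksum can do so with sha256sum (a sketch assuming a Unix-like shell with coreutils; sha256sum -c reads "digest  filename" pairs and prints OK or FAILED per file):

```shell
# Verify the local copy of my_project.zip against the digest posted above.
# Note the two spaces between the digest and the filename.
echo "68a1db86e9ee4591f13d1bcb282868602a8ff5bd67996bf96f8be218b9e5729c  my_project.zip" | sha256sum -c -
```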

I am not sure either. Perhaps @bbworld1 could clarify.

Perhaps we should protect main to enforce pull requests. What do you think?

I agree, this is a good idea. MinIO should probably have gone into a feature branch first.
I did test the (old) production configuration with MinIO, and it works properly: I can upload project zips and get generated code back. Perhaps we can integrate it into the new cicd production configuration?

It looks like the configuration settings are for testing.

I may have missed something, but the configuration looks valid to me. Is there some configuration that needs to be done to make it production-ready (besides setting the right production URLs and values)?

The application does not seem to be functioning correctly but I am not sure if the culprit is in the deployment configuration or in the application itself.

Is this a new deployment from main or from cicd? I recall that the application worked earlier.
I don’t believe deploy-on-push to main is actually set up yet, and the minio port was tested working, which leads me to believe the cause was a change in the deployment configuration in the cicd branch; I am not sure what exactly broke it yet.

@pavel.kirienko The error is caused by a deployment change. The cicd docker-compose configuration omitted the /tmp mount, which was needed because the temporary working directories had to be accessible to both Flask and Celery.

This issue has been fixed with the introduction of minio - everything now happens in Celery, and thus this mount is not needed anymore. That being said, we have to integrate the changes before this will work, so we should do that ASAP.
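For context, the missing piece in the old setup was a shared mount along these lines (a sketch only; the actual service names and paths in the real compose files may differ):

```yaml
# Hypothetical fragment of the pre-minio configuration: the Flask web
# service and the Celery worker bind-mount the same host directory so
# both containers see the same temporary working files.
services:
  web:
    volumes:
      - /tmp/nunaweb:/tmp/nunaweb
  worker:
    volumes:
      - /tmp/nunaweb:/tmp/nunaweb
```

With the MinIO-based flow, intermediate artifacts live in object storage instead, so no such shared mount is required.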

In the docker-compose.yml file, I see:

      - NS_MINIO_URL=minio:9000
      - NS_MINIO_RESULTS=http://localhost:8000/api/storage/results # Change
      - NS_MINIO_ACCESS_KEY=nunaweb
      - NS_MINIO_SECRET_KEY=supersecurepassword

What should these be set to?

I will work on merging the cicd branch into main today. I will also protect the main branch and set the auto-deployment to work on push to main.

I have rebased the cicd branch on top of main and resolved the file conflicts. It now deploys to production on pushes to main. I still need to modify the GitHub Actions build to set the minio password in docker-compose.yml; we don’t want it set in the file and checked into GitHub.
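One common pattern for this (a sketch, assuming the password is stored as a GitHub Actions secret and exported to the job as NS_MINIO_SECRET_KEY; the __MINIO_SECRET__ placeholder is my invention, not something in the repo) is to commit a placeholder and substitute it at deploy time:

```shell
# Hypothetical CI step: the committed docker-compose.yml carries the
# placeholder __MINIO_SECRET__ instead of the real password; sed swaps
# in the secret provided by GitHub Actions just before deployment.
sed -i "s/__MINIO_SECRET__/${NS_MINIO_SECRET_KEY}/" docker-compose.yml
```

This keeps the plaintext value out of the repository history entirely.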

Could we please test this new deployment again? If minio is working, it should resolve the namespace issues @pavel.kirienko mentioned earlier.

Okay, yes, the zip is now handled properly, but we are still missing nunavut/serialization. So this is probably related to the application itself after all. @bbworld1, when you run it locally, does it generate the support library correctly?

Fixed in https://github.com/UAVCAN/nunaweb/pull/13. Apologies for the oversight.

@clydej It appears that the plain HTTP version of the site at http://nunaweb.uavcan.org does not load correctly. Perhaps we should add a redirect?

This is something we still need to decide. The problem is that Let’s Encrypt uses port 80 to validate and renew the certificate that secures port 443. If we want port 80 to redirect to 443, then we will need to schedule downtime for the site periodically (say, once a week) to free up the port for the certificate check. We do this for the uavcan.org website, but I am not sure if we need it for this one too.

I think it’s likely a good idea. Users might be confused if they accidentally visit the http version of the site and get a “site down” message.

Hi folks, I’ve been watching this thread silently until now – nunaweb looks to be an exciting addition to the UAVCAN ecosystem!

I had a look at the repo and deduced that certbot is the Let’s Encrypt client of choice here. We can do one better and avoid any service downtime by using the certbot ‘webroot’ mode, as follows:

  • invoke certbot with something like certbot certonly --webroot --webroot-path /path/to/my/webroot, which is an option designed to let your existing webserver serve the HTTP-01 challenge tokens

  • mount /path/to/my/webroot/.well-known/acme-challenge/ in nunaweb-proxy to a path like /run/acme-challenge (or whatever)

  • add this to nginx.conf:

    location /.well-known/acme-challenge/ {
        alias /run/acme-challenge/;
    }

It might look a bit odd to ‘pipe in’ the challenges from the host environment into the container, but given that the proxy container is claiming port 80 already it’s probably the simplest way.

And also: one would need to reload nginx in the container after each renewal, so maybe run docker exec -it nunaweb-proxy nginx -s reload on the host. This too can be automated in certbot using a hook, although I’d have to read some docs to remember exactly how.
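Putting those pieces together, the renewal side on the host could look something like this (a sketch of a crontab entry; certbot remembers the webroot settings from the initial certonly run, and --deploy-hook fires only after a certificate is actually renewed):

```
# Hypothetical host crontab entry: attempt renewal twice a day (certbot
# only renews certificates that are close to expiry) and reload nginx
# inside the proxy container after a successful renewal.
0 3,15 * * * certbot renew --deploy-hook "docker exec nunaweb-proxy nginx -s reload"
```

Note that docker exec is invoked without -it here, since there is no TTY when running from cron.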


When I visit http://nunaweb.uavcan.org from a completely clean browser (clean cache), I am redirected to HTTPS as expected. How do you reproduce this?

It must be a client-side redirect in your browser. Curl shows the port is closed.

    $ curl http://nunaweb.uavcan.org
    curl: (7) Failed to connect to nunaweb.uavcan.org port 80: Connection refused

I get a blank page or an “Unable to connect” message in my browsers.

I confirm that the application is now functioning correctly.

As we just discussed with Clyde off the forum, he will be working on this now. I believe this is the last item that is holding us from calling this thing production-ready. @bbworld1 can you confirm?

Thanks Edwin. It is using the ‘standalone’ mode at the moment but we can try the ‘webroot’ mode. I will start a new branch for this.


@ebb I have implemented your suggestions in the port80 branch; it is pending review for merging into main. The --dry-run renewal was successful, and I have added a deploy-hook to reload nginx after renewal. We can see how it goes when it is deployed to production.


I believe this is the last item that is holding us from calling this thing production-ready.

I believe so! Everything else is just extra improvements and maintainability fixes, such as adding tests and features. Once it’s fully deployed, everything should be good to go 🙂

I have merged the changes into main and confirmed that the redirection works. The scheduled certbot check ran without error. I also ran it manually with the --dry-run option, and it passed (although it skipped the deploy-hook).

There is a pending PR to tighten up the security of the app, which will affect the production environment. If it is approved, it can be merged into main and we can do a final health check on the app. I will leave it to @bbworld1 and @pavel.kirienko to approve these changes.
