Major Updates to NCM

We’ve been working hard on some major changes during the past several months, and today we’re releasing a beta website to preview what’s coming soon to NCM:

New Features

  1. More Engines. We’ve simplified how engines are distributed throughout NCM’s infrastructure to make it easier to add new engines.
  2. More Lc0 Networks. We’ve improved support for third-party Lc0 networks such as the much-requested Sergio-V and jhorthos networks.
  3. Stoppable Calculations. Pro users can press the “Stop” button to send a stop command to the running engine and get an immediate result.
  4. Real-time Calculation Display. Pro users see moves output in real-time as the engine progresses in depth.
  5. Free Access to All Engines. Five-second calculations for all engines running on single-core CPU hardware are now free to all users.

Important Notes

The Beta website uses a separate and temporary user database. You’ll need to register a new account on the beta site to access all of the new features. We will erase the beta website and all beta website accounts after we officially deploy to production.

The Beta website only uses single-core CPU hardware. At this time, the beta website performs calculations on single-core CPU hardware regardless of which type of hardware is selected.

Technical Details

Next Chess Move has grown considerably in recent years, and due to NCM’s particular way of handling HTTP requests, it has started to show signs of outgrowing what Ruby on Rails can offer out of the box. So we rewrote NCM in Elixir.

In October of 2020 we completed rewriting the software that powers our backend servers – the servers responsible for running chess engines. We added a thin proxy layer so that the old Ruby on Rails frontends can communicate with the new Elixir backends without the need for any frontend code modifications.

The beta site released today includes the rewritten (Phoenix+Elixir) frontend web servers which communicate directly with the backends to provide the new features.

Please let us know what you think!

Please give the beta a try and let us know what you think. Either post a reply here, contact me directly at, or open up a support ticket. Any and all feedback – both positive and (especially) negative – is welcome and encouraged!


Do I have to pay again to use the NCM beta :pensive:?


Hi Cesar12! The beta site is totally free. It uses an entirely separate database so you’d need to create a new (temporary) account to use all of the features, but you’ll never be charged.


But in the beta I can’t use the RTX and 20 cores; 1 CPU gives a bad evaluation…

That’s correct, at least at the moment. We haven’t hooked the beta website up to the “pro” hardware yet for a bunch of technical reasons.

There are a ton of changes we’re testing all at once: code rewrites (Ruby to Elixir), server OS changes (Ubuntu to Debian), provisioning and deployment automation rewrites (Terraform + Ansible), encrypted overlay networks spanning several hosting providers (WireGuard), server health / resource monitoring (Prometheus, ELK), etc.

So essentially right now we’re trying to get feedback and build confidence in the new system. Once we’re confident that things are production ready – hopefully soon – we’ll deploy the changes so that you can use the new features on the 20-core CPU and RTX 2080 GPU hardware.


Any chance of you using RTX 3090 now that it is available?

Downloading Lc0 networks is fast, the other engines work fine, everything’s great! We need this to deploy now :wink:

I have a small suggestion for the “Other Engines” section. If possible, add Open Tal.

Maybe – right now the problem is that my dedicated server hosting providers do not have plans which include the RTX 3090. So I’d need to convince them to upgrade the GPUs in my servers. I’ve also been (lightly) exploring the colocation route where I’d be 100% responsible for the servers and rent the racks, but that’s kind of scary to me :slight_smile:

Ha, thank you! That downloading progress is new. Before, if the network didn’t exist on the server, the spinner would just spin until the server downloaded a copy. Now it shows the server-side download progress. If the backend server doesn’t have the requested network, it checks to see if another NCM server has it, and if so, copies it from that server. Otherwise it pulls it from an AWS S3 bucket.
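The fallback chain described above (local cache, then a peer NCM server, then S3) can be sketched roughly like this. This is an illustrative sketch only – all function names and data structures here are hypothetical, not NCM’s actual code:

```python
# Hypothetical sketch of the network-fetch fallback chain:
# 1. local cache, 2. copy from a peer server, 3. pull from S3.
# Dicts stand in for servers; s3_fetch stands in for the S3 client.

def fetch_network(name, local_cache, peers, s3_fetch):
    """Return the network weights, preferring the cheapest source."""
    if name in local_cache:                 # 1. already on this backend
        return local_cache[name]
    for peer in peers:                      # 2. copy from a peer NCM server
        if name in peer:
            local_cache[name] = peer[name]
            return local_cache[name]
    local_cache[name] = s3_fetch(name)      # 3. last resort: the S3 bucket
    return local_cache[name]

# Toy demonstration: the network exists on a peer, so S3 is never hit.
local = {}
peers = [{"sergio-v": b"weights-a"}]
net = fetch_network("sergio-v", local, peers, lambda n: b"from-s3")
print(net)  # b"weights-a", and it is now cached locally
```

After the first fetch the network stays in the local cache, which is why the download step disappears once every backend has seen a given network.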


Looking into this now! It looks like Open Tal is basically Rodent III with an additional book? The official distribution looks like it put the Rodent III sources into a Visual Studio project and packaged up the resulting Windows executables. I need the engine to run on Linux for it to be used on NCM, but it doesn’t appear that too much has changed source-code or build-wise. I’m working on putting together a Linux build now.

I think I’ve got OpenTal 1.1 added to the beta site. Can you give it a try and let me know if it’s playing as you expect?

Here’s the github repo which shows everything that was done to get a Linux build:


Thank you. This one should be exciting to play with once you hook it up to the 20-core CPU hardware.

I almost forgot: how about adding books (Cerebellum) to Stockfish?

I’ve made some progress on this, but it probably won’t be part of this release. I’m able to use the polyglot program to wrap a UCI engine, have it intercept the “go” commands, and output a book move if one is available. But doing so can introduce subtle changes to the engine IO which I’d rather not have to account for.
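The interception idea can be sketched as a small dispatch function: answer “go” from the book when the current position has a book move, and forward everything else to the engine untouched. This is a minimal illustrative sketch, not polyglot’s actual implementation; the book is a plain dict keyed by position hash, and the startpos key used below is the well-known polyglot hash of the starting position:

```python
# Sketch of polyglot-style "go" interception. If the current position
# has a book move, answer immediately; otherwise forward to the engine.

def handle_command(line, book, position_key, forward):
    """Return a 'bestmove' reply on a book hit, else forward the command."""
    tokens = line.split()
    if tokens and tokens[0] == "go":
        move = book.get(position_key)
        if move is not None:
            return f"bestmove {move}"   # short-circuit: never reaches engine
    forward(line)                        # everything else goes to the engine
    return None

# Toy demonstration with a one-entry book and a list standing in for
# the engine's stdin. 0x463B96181691FC9C is polyglot's startpos key.
sent = []
book = {0x463B96181691FC9C: "e2e4"}
out = handle_command("go depth 20", book, 0x463B96181691FC9C, sent.append)
out2 = handle_command("ucinewgame", book, 0, sent.append)
print(out, sent)  # the 'go' was intercepted; only 'ucinewgame' was forwarded
```

The subtlety mentioned above is visible even here: on a book hit the engine never sees the “go” at all, so anything downstream that expects engine-generated “info” lines before “bestmove” will behave differently.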

What looks more promising is to access the book directly:

The links to example code are broken, but I’ve found them on

Both of those are plain ANSI C programs which, after all these years, compile and run perfectly without any tooling aside from the compiler. I miss those days :slight_smile:

So eventually we should be able to use the above code to explicitly query any bin/polyglot book before even starting an engine.
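For reference, the polyglot .bin format itself is simple enough to query directly: each entry is 16 big-endian bytes (a 64-bit position key, a 16-bit packed move, a 16-bit weight, and a 32-bit learn field). Here is a hedged sketch of reading a book without any engine involved; the toy book written below is constructed in memory purely for demonstration:

```python
import os
import struct
import tempfile

# Each polyglot book entry is 16 bytes, big-endian:
# u64 position key, u16 packed move, u16 weight, u32 learn.

def read_book(path):
    entries = []
    with open(path, "rb") as f:
        while chunk := f.read(16):
            entries.append(struct.unpack(">QHHI", chunk))
    return entries

def decode_move(move):
    # Bit layout: to-square in bits 0-5, from-square in bits 6-11,
    # promotion piece in bits 12-14 (0 = no promotion).
    files, ranks = "abcdefgh", "12345678"
    sq = lambda s: files[s & 7] + ranks[s >> 3]
    promo = " nbrq"[(move >> 12) & 0x7].strip()
    return sq((move >> 6) & 0x3F) + sq(move & 0x3F) + promo

# Build a toy one-entry book: e2e4 from the starting position.
# e2 is square 12, e4 is square 28, so the packed move is (12 << 6) | 28.
raw = struct.pack(">QHHI", 0x463B96181691FC9C, (12 << 6) | 28, 100, 0)
path = os.path.join(tempfile.mkdtemp(), "toy.bin")
with open(path, "wb") as f:
    f.write(raw)

entries = read_book(path)
print(decode_move(entries[0][1]))  # e2e4
```

Real books keep entries sorted by key, so a binary search over the file (rather than reading it whole) finds all moves for a position quickly; picking among them by weight gives the book’s preference.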


Damnit chendry here I am debating whether or not to upgrade and you throw this legendary update in. Excellent work my friend this looks fantastic :ok_hand::clap::clap::clap::clap:

Ha! Hi Brad, and thank you for the vote of confidence :slight_smile: So far the only problems have been relatively minor. Hoping to get this out to production soon!


I’m waiting 15-20 seconds for a 135 MB network to download every time I click “Calculate.” On the actual site Lc0 begins calculating almost immediately.

Aye, yes. Here’s why that’s happening. Whenever you click Calculate, the request gets routed to a backend server, and, if the particular backend server your request landed on doesn’t have the network, it downloads it. In staging there are currently 10 backend servers, so it would take at least 10 calculations for that Lc0 network to be installed on all backends, at which point the download step won’t happen anymore.

The production servers have been running a lot longer and have more networks stored, so that’s why they usually start calculating faster.

We should probably be proactively downloading the more popular networks.

I appreciate all the work that you put into this project. I am a paying member. Is there a manual that explains all the functions of the site?