Q. Is Kuzzle free?
Kuzzle is fully open source and published under the Apache 2 license: https://github.com/kuzzleio/kuzzle/blob/master/LICENSE.md
Q. How do I download and run Kuzzle on Windows?
The easiest way is to use Docker and Docker Compose for Windows.
You can get these tools here: https://docs.docker.com/docker-for-windows/
Then, you can use this docker-compose.yml file to start a Kuzzle development stack: https://kuzzle.io/docker-compose.yml
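Assuming Docker Desktop is installed and the `docker-compose` command is available, starting the stack amounts to something like this (a sketch, not the only way to do it):

```shell
# Download the development stack definition (Kuzzle, Elasticsearch, Redis)
curl -sSL https://kuzzle.io/docker-compose.yml -o docker-compose.yml

# Start the whole stack in the foreground
docker-compose up
```

Once the containers are up, Kuzzle listens on port 7512 by default.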
An alternative, requiring a bit more work but the option we recommend, is to run Kuzzle with Docker inside a Linux virtual machine. Once you have a virtual machine set up, you can install Kuzzle with this script: https://get.kuzzle.io . Or you can directly use this docker-compose.yml file https://kuzzle.io/docker-compose.yml and run it with docker-compose.
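Inside the Linux virtual machine, one common pattern for running such an installation script is the following one-liner (assuming `curl` is available; reviewing the script before executing it is always a good idea):

```shell
# Fetch the installation script and run it in the current shell
bash -c "$(curl -sSL https://get.kuzzle.io)"
```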
Otherwise, you can install Kuzzle manually by following the prerequisites detailed in our documentation: https://docs.kuzzle.io/guide/1/essentials/installing-kuzzle/#manual-installation
We don’t advise running Kuzzle on a Windows machine in production.
Q. What are the software requirements to install Kuzzle on-premises?
The minimum software requirements are described on the following documentation page: https://docs.kuzzle.io/guide/1/essentials/installing-kuzzle/#manual-installation
Q. What are the hardware requirements to install Kuzzle on-premises?
Elasticsearch and Redis have their own dedicated documentation (ES: https://www.elastic.co/blog/hot-warm-architecture-in-elasticsearch-5-x and Redis: https://docs.redislabs.com/latest/rs/administering/designing-production/hardware-requirements/).
For Kuzzle itself, you’ll need servers running Linux. Apart from that, hardware requirements depend on the load you expect, most notably the number of simultaneous connections, the expected requests/s throughput, and the rough number of real-time filters.
For basic usage, we recommend at least 4 CPUs and 4 GB of RAM per node, and a cluster of at least 3 nodes. This should cover most needs.
From there, you can scale Kuzzle horizontally whenever you need to handle more simultaneous connections or to process a higher number of requests per second.
In its current state, the only non-scalable feature is the number of real-time filters to index: a rough estimate we measured is about 1 GB of RAM per million _unique_ filters, so if you have millions of users each needing their own real-time filters, you’ll have to add more RAM to each node.
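As a back-of-the-envelope check of that rule of thumb, here is a small shell sketch (the FILTERS figure is a hypothetical workload, not a measured one):

```shell
# Rule of thumb from above: ~1 GB of RAM per million unique real-time filters
FILTERS=5000000   # hypothetical workload: 5 million unique filters

# Integer estimate of the extra RAM needed per node
echo "$(( FILTERS / 1000000 )) GB of additional RAM per node"
# prints: 5 GB of additional RAM per node
```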