MERCURY FACILITY PROJECT GOAL

(In Development) Project Mercury aims to provide a contract-free, honest virtual machine hosting platform built on dedicated infrastructure and intended for enterprise use: an intentionally under-subscribed product with highly predictable performance on a privately managed network.

MERCURY FACILITY PROJECT DETAILS

Virtual Machine hosting is one of the more complex infrastructure products provided by technology companies. Orchestrating the network, routing, firewalls, virtualized hardware components, physical hardware assembly, and OS tuning is a large project to tackle. After a decade of heavy use as a client in this field, we have learned, through a good deal of pain, which solutions we would like to see built to maximize productivity on a managed compute network. We currently call this orchestration facility 'Project Mercury'.

Our resource allocation model aims to defeat the common problems of over-loaded ratios and noisy neighbors from the moment of activation by being far more conservative with over-subscription of compute and memory. Every client VM is allocated at a 1:1 vCPU:pCPU ratio, and memory is never overcommitted, so resources are never over-subscribed beyond the limits of the physical hardware.
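The admission rule above can be illustrated with a minimal sketch. The Node class, its capacity figures, and the check below are assumptions for illustration only, not Mercury's actual scheduler:

    # Minimal sketch of a no-overcommit admission check (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Node:
        physical_cores: int            # pCPUs available for guests
        memory_gb: int                 # physical RAM available for guests
        allocated_cores: int = 0
        allocated_memory_gb: int = 0

        def can_admit(self, vcpus: int, memory_gb: int) -> bool:
            """Admit a VM only while 1:1 vCPU:pCPU and uncommitted RAM remain."""
            return (self.allocated_cores + vcpus <= self.physical_cores
                    and self.allocated_memory_gb + memory_gb <= self.memory_gb)

        def admit(self, vcpus: int, memory_gb: int) -> None:
            if not self.can_admit(vcpus, memory_gb):
                raise RuntimeError("node full: resources are never over-subscribed")
            self.allocated_cores += vcpus
            self.allocated_memory_gb += memory_gb

    node = Node(physical_cores=64, memory_gb=256)
    node.admit(vcpus=8, memory_gb=32)              # fits within physical limits
    print(node.can_admit(vcpus=60, memory_gb=8))   # False: would exceed the 1:1 pCPU cap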

KYNGIN's Project Mercury is designed as a managed service from the start. There is no need to bolt on extra (costly) components for DHCP, firewalls, point-in-time snapshots, or VPN tunnels. Things that are simple (or complex but necessary) are included as standard, without the hidden up-charges customers often discover only after moving to a new cloud provider. All of this is included in the standard pricing model which, just like the Storage Facility, always includes a PiT storage allocation for Point-in-Time snapshots that is (currently) equal in size to the active storage allocation. Effectively, if you pay for 400GB of storage on a VM, you are provided an additional 400GB of snapshot allocation, included in the storage pricing. Just like the Storage Facility, PiT services are active and operational by default, with zero configuration necessary. For high availability, off-node backups use the same included nightly PiT snapshots of hard disks and resource configurations, allowing easy and fast point-in-time recovery should a catastrophic event (such as a meteor strike or simultaneous multi-SSD failure) occur.
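The included PiT reserve works out to simple arithmetic. The function and pit_ratio parameter below are hypothetical, used only to illustrate the 'paid storage plus equal snapshot reserve' model described above:

    # Illustrative arithmetic for the included Point-in-Time (PiT) allocation.
    # pit_ratio = 1.0 reflects the current "equal in size" policy; it is an
    # assumption for this sketch, not a configurable Mercury setting.
    def provisioned_storage_gb(active_gb: int, pit_ratio: float = 1.0) -> dict:
        snapshot_gb = int(active_gb * pit_ratio)
        return {
            "active_gb": active_gb,        # storage the customer pays for
            "snapshot_gb": snapshot_gb,    # included PiT snapshot reserve
            "total_gb": active_gb + snapshot_gb,
        }

    print(provisioned_storage_gb(400))
    # {'active_gb': 400, 'snapshot_gb': 400, 'total_gb': 800}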

Annual commitments frustrate us, but they have become necessary to get fair pricing from most cloud providers. These commitments encourage vendor lock-in and leave clients less able to respond to fluctuating demands across their product portfolio. As with all KYNGIN products, Mercury includes simple monthly invoices and no contracts. We will beat those one-year terms while still letting you pay monthly, because we believe your success is our success.

MERCURY NODES

We currently have a single design spec for Mercury Nodes, targeting recent-generation AMD EPYC and Intel Xeon Scalable processors at 3.0 GHz per core, with enough memory to provide 4GB per CPU core. All disks are enterprise NVMe SSDs rated at 1M+ read IOPS and 300K+ write IOPS. Promised IOPS per VM is determined by the allocated storage size. Options are available to mount SMB / SSHFS / NFS (testing) file stores from the KYNGIN Storage Facility for more cost-effective long-term/backup storage. Initially, to keep this project simple, all allocations are made in increments of 1+ CPU core, 4GB+ memory, and 40GB+ NVMe SSD. We will publish recommendations, dependent on the installed operating system, soon.
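For illustration, the sizing increments above can be expressed as a small sketch. The size_vm function, its defaults, and the iops_per_gb figure are assumptions for this example, not a published Mercury formula:

    # Rough sketch of the allocation increments described above: whole cores,
    # 4 GB of RAM per core, and at least 40 GB of NVMe storage per VM.
    def size_vm(cores: int, storage_gb: int = 40, iops_per_gb: int = 50) -> dict:
        if cores < 1 or storage_gb < 40:
            raise ValueError("minimum allocation is 1 core and 40 GB NVMe")
        return {
            "vcpus": cores,
            "memory_gb": cores * 4,                      # 4 GB per CPU core
            "nvme_gb": storage_gb,
            "promised_iops": storage_gb * iops_per_gb,   # scales with allocated storage
        }

    print(size_vm(cores=4, storage_gb=200))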