NoSQL agent with SQL back-end(s)
2018-12-29 15:41
It happened again. I get this great idea that slowly develops and gives me the feeling I'm on to something, but there's no time anywhere in the foreseeable future to put into it and get a proof-of-concept or a first project that makes it work. So, for what it's worth, I'm writing it down here quickly, in the hope that sharing it with you gives this idea a better chance of becoming useful.
I've read that some NoSQL solutions are actually about eventual consistency, meaning that, in the best case, a query for data that was just inserted or updated will already return this new data, provided the server (or server farm) had already fully processed it. Worst case, the data just disappears completely for a few milliseconds, but that's another story altogether.
I haven't done anything serious with NoSQL yet, and quite a lot with good old SQL, recently with SQLite, which I've really grown to love in a short period of time. But still there's something there that's really suited to the new style of programming going on with all the new web projects and this 'Internet of Things' everyone is on about... To find out, I've been trying to interface with a number of them from Delphi in the most direct way I could find and make work with reasonable effort. I like how TMongoWire worked out; to talk to PostgreSQL, all you need is in libPQ.dll; but a number of others just stick with a plain HTTP API where you PUT and GET things at their own URL. There's a beauty to that, really. The structure of your documents is nicely contained in JSON, and HTTP is such a stable protocol that you're sure to be able to access it from almost any platform.
So that's where the idea came from: what if I made my own service where you can just PUT or GET JSON documents? On the back-end, jsonDoc would do the heavy lifting, and the storage itself could be in a decent SQL service. And/or it could be in something intermediate like memcached. And/or the saving to storage could happen asynchronously, shortly after the actual PUT call (hence the eventual consistency).
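Just to sketch the front of such a service (the URL scheme and the document here are made up):

    PUT /Items/42 HTTP/1.1
    Content-Type: application/json

    {"Name":"Widget","Price":12.50}

    HTTP/1.1 201 Created

and later:

    GET /Items/42 HTTP/1.1

    HTTP/1.1 200 OK
    Content-Type: application/json

    {"Name":"Widget","Price":12.50}

The trick would be that the 201 may well come back before the document has actually been written to the SQL back-end; that write happens asynchronously, which is exactly where the eventual consistency comes in.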
For example, you fill a collection with items that have a number of fields, one of them being "Price", and later you need the items above or below a certain price, so you'd want SQL like "select * from Items where Price<@p". So in this service I'm imagining, there would be a meta-description on the collection saying that you've provided a SQL database somewhere, and the service would be responsible for having done the "create table Items (ID some primary key, DATA json, Price decimal(8,2))" and for filling it with the data.
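Behind the scenes, the service would then run something along these lines (a rough sketch; the exact types and the @-parameter syntax differ per SQL back-end):

    create table Items (ID integer primary key, DATA json, Price decimal(8,2));

    -- on every PUT: store the whole document, plus the extracted field
    insert into Items (ID, DATA, Price) values (@id, @doc, @price);

    -- the query the plain document store couldn't do on its own
    select DATA from Items where Price < @p;

The documents keep living as JSON in the DATA column; the Price column is just a projection of one field, there so the SQL engine can index and filter on it.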
And this would be the beauty of it: if you need an extra column later, you just say so, and the connector would be responsible for the "alter table Items add ..." and for filling that column with the data from the stored items. Perhaps even slowly, asynchronously, together with the other work. Or even with another connector altogether, say PostgreSQL and MySQL side by side, perhaps even as a fail-over for each other.
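For a hypothetical new "Quantity" field, the connector's work could look something like this (json_extract is SQLite's syntax; other back-ends have their own way of pulling a value out of a JSON column):

    alter table Items add Quantity integer;

    -- back-fill from the stored documents, possibly in small batches
    -- alongside the normal work
    update Items set Quantity = json_extract(DATA, '$.Quantity')
      where Quantity is null;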
But I'm dreaming. It would be a load of work just to get something working, and even more work to get enough connectors working well enough to even demonstrate how it would work. And then there are the performance trials... And the evangelising to see whether it solves other people's problems anyway... It would be a really great opportunity to finally cut my teeth on this IOCP thing.