Hey Pythonistas, is this nuts or genius:
I should teach people to mainly use in-memory SQLite databases: load the on-disk file into an in-memory db at startup, then save it back to disk when you're done. The in-memory ones are like 100x faster, especially for writes. You can wrap the code in a try-except that saves it to disk in the event of a crash.
Are there any problems with this approach? Why not always use in-memory dbs?
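The pattern being described can be sketched with the standard library's `sqlite3.Connection.backup()` method, which copies one database into another. This is a minimal sketch, not the poster's actual code; the `"app.db"` path and the `notes` table are made-up placeholders:

```python
import sqlite3

def load_into_memory(path):
    """Copy an on-disk SQLite database into a fresh in-memory one."""
    disk = sqlite3.connect(path)
    mem = sqlite3.connect(":memory:")
    disk.backup(mem)  # online backup API: copies disk's contents into mem
    disk.close()
    return mem

def save_to_disk(mem, path):
    """Copy the in-memory database back out to the file."""
    disk = sqlite3.connect(path)
    mem.backup(disk)
    disk.close()

# Hypothetical usage: all writes hit the fast in-memory db, and the
# finally: block persists them to disk even if an exception is raised.
mem = load_into_memory("app.db")
try:
    mem.execute("CREATE TABLE IF NOT EXISTS notes (body TEXT)")
    mem.execute("INSERT INTO notes VALUES (?)", ("hello",))
    mem.commit()
finally:
    save_to_disk(mem, "app.db")
    mem.close()
```

Note that `try/finally` only helps with Python-level exceptions; as the replies below point out, a hard crash skips it entirely.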
@AlSweigart for many (maybe MOST!) tasks where sqlite is used, your approach would be just fine.
What could possibly go wrong?
If your program can run multiple instances on the same sqlite file path, the last instance to close would "win" and overwrite the database for the rest.
If your program is storing configuration in sqlite files, other processes cannot see the current state of things as long as the app is running.
Finally, not all crashes will nicely unwind through a `try:finally:` (think a segfault in a C extension, or the process being killed).
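That last point can be demonstrated with `os._exit()` standing in for a hard crash (a segfault or SIGKILL behaves the same way): the process dies without unwinding, so the `finally:` that was supposed to save the in-memory db never runs. The paths and table name here are made up for the demo:

```python
import os
import sqlite3
import subprocess
import sys
import tempfile
import textwrap

# Child process: fills an in-memory db, then dies hard before the
# finally: block can back it up to disk.
child_src = textwrap.dedent("""
    import os, sqlite3, sys
    mem = sqlite3.connect(":memory:")
    mem.execute("CREATE TABLE notes (body TEXT)")
    mem.execute("INSERT INTO notes VALUES ('important')")
    mem.commit()
    try:
        os._exit(1)  # hard exit: no exception, no finally, no atexit
    finally:
        disk = sqlite3.connect(sys.argv[1])
        mem.backup(disk)
        disk.close()
""")

db_path = os.path.join(tempfile.mkdtemp(), "crash.db")
subprocess.run([sys.executable, "-c", child_src, db_path])

# The backup in finally: never ran, so the file was never even created
# and every in-memory write is lost.
print(os.path.exists(db_path))  # → False
```

The same data would have survived a crash mid-transaction if it had been written directly to an on-disk database, since SQLite's journaling protects committed writes.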
@AlSweigart Honestly I could've used such a tutorial while I was learning Python and desperately searching for a practical SQL tutorial back in the day, when I was trying to get a role in data science.
The only issue I could possibly see is that SQLite habits might not carry well to other SQL databases, but it's probably a moot point. I don't really have enough experience to comment either way!
@AlSweigart depends on how big the DB is.