Basically, they're scripts you can run on the fly to pull calculated data. You can (mostly) treat them like tables themselves if you create them on the server.
So if you have repeated requests, you can save the view with somewhat broader parameters and then just run SELECT * FROM [View_Schema].[My_View] WHERE [Year] = 2023 or whatever.
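Something like this, roughly (just a sketch; the tables and columns are made up, T-SQL-ish syntax to match the SELECT above):

```sql
-- Define the view once on the server with broader parameters.
CREATE VIEW [View_Schema].[My_View]
AS
SELECT
    YEAR(o.OrderDate) AS [Year],
    c.Region,
    SUM(o.Amount)     AS TotalAmount
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
GROUP BY YEAR(o.OrderDate), c.Region;
GO

-- Callers then filter the broader view down to what they need:
SELECT * FROM [View_Schema].[My_View] WHERE [Year] = 2023;
```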
It can really slow things down if your views start calling other views, since views aren't actually tables. If you've got a view that you find yourself calling from a lot of other views, try extracting as much of it as you can (everything that doesn't need to be live) into a calculated table that's updated by a stored procedure. Then schedule the stored procedure to run at whatever frequency best captures the changes (usually daily). It can make a huge difference in runtime at the cost of storage space.
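The "calculated table + stored procedure" pattern looks roughly like this (T-SQL sketch; dbo.Order_Totals and dbo.Orders are hypothetical names):

```sql
CREATE PROCEDURE dbo.Refresh_Order_Totals
AS
BEGIN
    SET NOCOUNT ON;

    -- Rebuild the summary table from scratch; fine for a daily refresh.
    TRUNCATE TABLE dbo.Order_Totals;

    INSERT INTO dbo.Order_Totals ([Year], [Month], TotalAmount)
    SELECT YEAR(OrderDate), MONTH(OrderDate), SUM(Amount)
    FROM dbo.Orders
    GROUP BY YEAR(OrderDate), MONTH(OrderDate);
END;
```

Then you schedule it with whatever you've got on hand (SQL Server Agent, pg_cron, a plain cron job, etc.).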
They can be in some cases! There's a type of view called an "indexed" or "materialized" view where the view data is stored on disk like a regular table. It's automatically kept up to date whenever the source tables change (at least in SQL Server; Postgres makes you refresh its materialized views manually). Doesn't work well for tables that are very frequently updated, though.
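For example, in SQL Server flavour it looks something like this (table and column names are hypothetical, and Amount is assumed NOT NULL since indexed views don't allow SUM over nullable columns):

```sql
-- SCHEMABINDING is required for an indexed view.
CREATE VIEW dbo.Order_Totals_By_Customer
WITH SCHEMABINDING
AS
SELECT
    CustomerID,
    SUM(Amount)  AS TotalAmount,
    COUNT_BIG(*) AS OrderCount   -- COUNT_BIG(*) is required when GROUP BY is used
FROM dbo.Orders
GROUP BY CustomerID;
GO

-- This unique clustered index is what actually materializes the view on disk;
-- from then on the engine keeps it in sync as dbo.Orders changes.
CREATE UNIQUE CLUSTERED INDEX IX_Order_Totals_By_Customer
    ON dbo.Order_Totals_By_Customer (CustomerID);
```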
Having said that, if you're doing a lot of data aggregation (especially if it's a sproc that runs daily), you'd probably want to set up a separate OLAP database so that large analytical queries don't slow down transactional queries. With open-source tech, that usually means Hive plus Presto or Spark, with Apache Airflow handling the scheduling.
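The daily batch that Airflow kicks off on the OLAP side might look something like this (Hive SQL sketch; table names are made up):

```sql
-- Rebuild the day-level rollup from a batch-loaded copy of the transactional data.
INSERT OVERWRITE TABLE analytics.daily_order_totals
SELECT
    to_date(order_ts) AS order_date,
    region,
    SUM(amount)       AS total_amount,
    COUNT(*)          AS order_count
FROM analytics.orders
GROUP BY to_date(order_ts), region;
```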
Also, if you have data that's usually aggregated by column, a column-oriented database like ClickHouse is usually way faster than a regular row-based database. These store data per-column rather than per-row, so aggregating one column across millions or even billions of rows (e.g. average page load time for every hit ever recorded) is fast.
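As a rough ClickHouse sketch (hypothetical table and column names):

```sql
-- Column-oriented table; each column is stored and compressed separately.
CREATE TABLE page_hits
(
    hit_time     DateTime,
    url          String,
    load_time_ms UInt32
)
ENGINE = MergeTree
ORDER BY hit_time;

-- Only the load_time_ms column gets read off disk for this query,
-- which is why it stays fast even over billions of rows.
SELECT avg(load_time_ms) AS avg_load_time_ms
FROM page_hits;
```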