yuu

joined 2 years ago
[–] [email protected] 5 points 1 year ago* (last edited 1 year ago)

oh this is one of my wallpapers

i made a 1920x1080 version of it by horizontally tiling 3 duplicates, like this (i got the freely licensed original from wikimedia commons, under https://creativecommons.org/licenses/by-sa/4.0/deed.en)

Observable_Universe_Logarithmic_Map_%28horizontal_layout_english_annotations%29.x1080-tiled.png
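
fwiw, a minimal sketch of the tiling step with Pillow, in case anyone wants to redo it (filenames here are placeholders, not the actual commons file):

```python
# pip install Pillow; filenames are placeholders
from PIL import Image

src = Image.open("observable_universe_map.png")

# scale the source to the target height, keeping the aspect ratio
scale = 1080 / src.height
tile = src.resize((round(src.width * scale), 1080))

# paste 3 copies side by side, then center-crop to 1920x1080
out = Image.new("RGB", (tile.width * 3, 1080))
for i in range(3):
    out.paste(tile, (i * tile.width, 0))

left = max(0, (out.width - 1920) // 2)
out.crop((left, 0, left + 1920, 1080)).save("wallpaper_1920x1080.png")
```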

[–] [email protected] 11 points 1 year ago

just use a community-led or non-profit-foundation-led distro: NixOS (better than Silverblue/Kinoite in all the aspects they try to sell), Arch, or Debian.

For professional usage, you generally go with Ubuntu or some RHEL derivative.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

When I was packaging Flatpaks, the greatest downside was:

No built-in package manager

There is a repo with shared dependencies, but it covers very few of them, so you have to package all the dependencies yourself... So I personally am not interested in packaging for Flatpak other than on very rare occasions... Nix and Guix are definitely better solutions (except for the isolation aspect, which is not a built-in feature there; you have to set it up yourself, e.g. as sketched below), and you can use them on many distros; Nix even on macOS!
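
For example, a hypothetical launcher that manually sandboxes a Nix-installed app with bubblewrap (the store path is a placeholder; adjust binds and flags to what your app actually needs):

```python
# hypothetical: manual isolation for a Nix-installed app via bubblewrap (bwrap),
# since Nix, unlike Flatpak, does not sandbox apps by default
import subprocess

# placeholder store path; substitute the real one from your Nix profile
APP = "/nix/store/<hash>-hello-2.12/bin/hello"

subprocess.run([
    "bwrap",
    "--ro-bind", "/nix", "/nix",   # expose the Nix store read-only
    "--proc", "/proc",
    "--dev", "/dev",
    "--tmpfs", "/tmp",
    "--unshare-all",               # no network, IPC, or host namespaces
    "--", APP,
], check=True)
```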

[–] [email protected] 0 points 1 year ago (2 children)

Some of them will detect whether you are running them in a virtual machine. For example, Safe Exam Browser (http://safeexambrowser.org/) by ETH Zurich.

Ironically enough, it is free software: https://github.com/SafeExamBrowser
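
Their actual detection logic is in that repo; purely as a generic illustration (not SEB's implementation), one common heuristic on x86 Linux checks the CPU's hypervisor flag and the DMI vendor strings:

```python
# generic VM-detection heuristic for x86 Linux; NOT Safe Exam Browser's actual code
from pathlib import Path

KNOWN_VM_VENDORS = {"QEMU", "VMware, Inc.", "innotek GmbH", "Xen"}  # incomplete list

def looks_virtualized() -> bool:
    # most hypervisors set the "hypervisor" CPUID flag, visible in /proc/cpuinfo
    if "hypervisor" in Path("/proc/cpuinfo").read_text():
        return True
    # DMI vendor strings often name the virtualization product
    vendor = Path("/sys/class/dmi/id/sys_vendor")
    return vendor.exists() and vendor.read_text().strip() in KNOWN_VM_VENDORS

print("virtualized" if looks_virtualized() else "no VM detected")
```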

 

cross-posted from [email protected]: https://group.lt/post/46385

Adopting DevOps practices is nowadays a recurring task in the industry. DevOps is a set of practices intended to reduce the friction between software development (Dev) and IT operations (Ops), resulting in higher-quality software and a shorter development lifecycle. Even though many resources talk about DevOps practices, they are often inconsistent with each other about which practices are best. Furthermore, they lack the detail and structure that beginners to the DevOps field need to understand them quickly.

In order to tackle this issue, this paper proposes four foundational DevOps patterns: Version Control Everything, Continuous Integration, Deployment Automation, and Monitoring. The patterns are detailed and structured enough to be easily reused by practitioners, yet flexible enough to accommodate the different needs and quirks that might arise from their actual usage context. Furthermore, the patterns are tuned to the DevOps principle of Continuous Improvement by including metrics, so that practitioners can improve their pattern implementations.


The article identifies and includes, but does not fully describe, 2 other patterns in addition to the four above (so 6 in total):

  • Cloud Infrastructure, which includes cloud computing, scaling, infrastructure as code, ...
  • Pipeline, "important for implementing Deployment Automation and Continuous Integration, and segregating it from the others allows us to make the solutions of these patterns easier to use, namely in contexts where a pipeline does not need to be present."
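
As a toy illustration of the Pipeline / Deployment Automation idea (not from the paper; the stage commands are placeholders), each stage runs in order and any failure aborts the run:

```python
# toy pipeline sketch; real setups use a CI system, this only shows the shape
import subprocess
import sys

STAGES = [
    ("test",   ["pytest", "-q"]),
    ("build",  ["docker", "build", "-t", "myapp:latest", "."]),
    ("deploy", ["./deploy.sh", "production"]),  # placeholder script
]

for name, cmd in STAGES:
    print(f"--- stage: {name}")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"stage '{name}' failed; aborting pipeline")
```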

[Figure: Overview of the pattern candidates and their relation]

The paper is interesting for the structure it uses to describe each pattern:

  • Name: An evocative name for the pattern.
  • Context: Contains the context for the pattern providing a background for the problem.
  • Problem: A question representing the problem that the pattern intends to solve.
  • Forces: A list of forces that the solution must balance out.
  • Solution: A detailed description of the solution for our pattern’s problem.
  • Consequences: The implications, advantages and trade-offs caused by using the pattern.
  • Related Patterns: Patterns which are connected somehow to the one being described.
  • Metrics: A set of metrics to measure the effectiveness of the pattern’s solution implementation.
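
To make the Metrics part concrete (this is my own made-up illustration, not the paper's definitions), here are two common DevOps metrics computed from a toy deployment log:

```python
# illustration only: deployment frequency and change failure rate
# from a toy deployment log; the data and layout are made up
from datetime import date

deploys = [  # (day, succeeded)
    (date(2024, 5, 1), True),
    (date(2024, 5, 3), False),
    (date(2024, 5, 7), True),
]

span_days = max((deploys[-1][0] - deploys[0][0]).days, 1)
failures = sum(1 for _, ok in deploys if not ok)

print(f"deployment frequency: {len(deploys) / span_days:.2f} per day")
print(f"change failure rate: {failures / len(deploys):.0%}")
```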
 

cross-posted from c/[email protected]: https://group.lt/post/44632

This kind of scaling issue is new to Codeberg (a nonprofit free software project), but not to the world. All projects on earth likely went through this at a certain point or will experience it in the future.

When people like me talk about scaling... it's about increasing computing power, distributed storage, replicated databases and so on. There are all kinds of technologies available to solve scaling issues. So why, damn, is Codeberg still having performance issues from time to time?

...we face the "worst" kind of scaling issue in my perception. That is, one you don't see coming (e.g. from the software getting slower day by day, or from watching the storage pool fill up); instead, it appears out of the blue.

The hardest scaling issue is: scaling human power.

Configuration, Investigation, Maintenance, User Support, Communication – all require some effort, and it's not easy to automate. In many cases, automation would consume even more human resources to set up than we have.

There are no paid night shifts, not even payment at all. Still, people have become used to the always-available guarantees, and demand the same from us: Occasional slowness in the evening of the CET timezone? Unbearable!

I do understand the demand. We definitely aim for a better service than we sometimes provide. However, sometimes, the frustration of angry social-media-guys carries me away...

There are two primary blockers that prevent scaling human resources. The first one is trust: because we can't yet afford to hire employees who work on tasks for a defined amount of time, work naturally has to be distributed over many volunteers with limited time commitment... The second problem is in part technical: unlike major players, which have nearly unlimited resources available to meet high demand, scaling Codeberg's systems...

TLDR: Codeberg has sustainability issues with scaling because it is a nonprofit with very limited resources, mainly human resources, in the face of high demand. Unpaid volunteers do all the work, so it needs more people volunteering, and more money.