[-] [email protected] 2 points 1 day ago

I developed this script for creating permanent/static archives of social media exports, so it's not a full solution: it isn't a web service, it expects file inputs, and it uses a probably incomplete list of shorteners to avoid pulling real pages. But the script, along with the shorteners.txt file in the same repository (it iterates on redirects until it finds a domain not on the list), might at least inspire a solution if it doesn't fit your specific cases.
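In case the shape of it helps, the core loop amounts to something like this (a minimal Python sketch rather than the actual script, and the one-domain-per-line format for shorteners.txt is my assumption):

import requests
from urllib.parse import urljoin, urlparse

# Sketch of the idea, not the real script: shorteners.txt is the file
# from the repository, assumed to hold one domain per line.
with open("shorteners.txt") as f:
    shorteners = {line.strip().lower() for line in f if line.strip()}

def expand(url: str, max_hops: int = 10) -> str:
    """Follow redirects only while the domain is a known shortener."""
    for _ in range(max_hops):
        host = (urlparse(url).hostname or "").lower()
        if host not in shorteners:
            return url  # a real page; stop without fetching it
        resp = requests.head(url, allow_redirects=False, timeout=10)
        if "Location" not in resp.headers:
            return url
        url = urljoin(url, resp.headers["Location"])
    return url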

13
submitted 3 weeks ago by [email protected] to c/[email protected]

(Apologies in advance if this is the wrong spot to ask for help, and/or if the length annoys people.)

I'm trying to set up 2FAuth on a local server (old Raspberry Pi, Debian), alongside some other services.

Following the self-hosting directions, I believe that I managed to get the code running, and I can reach the page, but I can't register the first/administrative/only account. Presumably, something went wrong in either the configuration or the reverse proxy, and I've run out of ideas, so I could use an extra pair of eyes on it, if somebody has the experience.

The goal is to serve it from http://the-server.local/2fa (I'd make up a better hostname, but the real name of the server is worse). Currently, the pages (login, security device, about, reset password, register) load, but when I try to register an account, it shows a "Resource not found / 404" page ("Item" in the title).

Here's the (lightly redacted) .env file, mostly just the defaults.

APP_NAME=2FAuth
APP_ENV=local
APP_TIMEZONE=UTC
APP_DEBUG=false
[email protected]
APP_KEY=base64:...
APP_URL=http://the-server.local/2fa
APP_SUBDIRECTORY=2fa
IS_DEMO_APP=false
LOG_CHANNEL=daily
LOG_LEVEL=notice
CACHE_DRIVER=file
SESSION_DRIVER=file
DB_CONNECTION=sqlite
DB_DATABASE=/var/www/2fauth/database/database.sqlite
DB_HOST=
DB_PORT=
DB_USERNAME=
DB_PASSWORD=
MYSQL_ATTR_SSL_CA=
MAIL_MAILER=log
MAIL_HOST=my-vps.example
MAIL_PORT=25
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null
MAIL_FROM_NAME=2FAuth
[email protected]
MAIL_VERIFY_SSL_PEER=true
THROTTLE_API=60
LOGIN_THROTTLE=5
AUTHENTICATION_GUARD=web-guard
AUTHENTICATION_LOG_RETENTION=365
AUTH_PROXY_HEADER_FOR_USER=null
AUTH_PROXY_HEADER_FOR_EMAIL=null
PROXY_LOGOUT_URL=null
WEBAUTHN_NAME=2FAuth
WEBAUTHN_ID=null
WEBAUTHN_USER_VERIFICATION=preferred
TRUSTED_PROXIES=null
PROXY_FOR_OUTGOING_REQUESTS=null
CONTENT_SECURITY_POLICY=true
BROADCAST_DRIVER=log
QUEUE_DRIVER=sync
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
PUSHER_APP_ID=
PUSHER_APP_KEY=
PUSHER_APP_SECRET=
PUSHER_APP_CLUSTER=mt1
VITE_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
VITE_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
MIX_ENV=local

Then, there's the hard-won progress on the NGINX configuration.

server {
    listen 80;
    server_name the-server.local;
# Other services
    location /2fa/ {
        alias /var/www/2fauth/public/;
        index index.php;
        try_files $uri $uri/ /index.php?$query_string;
    }
    location ~ ^/2fa/(.+?\.php)(/.*)?$ {
        alias /var/www/2fauth/public/;
        fastcgi_pass unix:/var/run/php/php8.3-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        set $path_info $fastcgi_path_info;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root/$1;
        include fastcgi_params;
    }
# ...and so on

I have tried dozens of variations here, especially in the fastcgi_param lines, and almost all of them either don't change the situation or break the entire app with a 403 or 404 error. This version at least shows the login/register/about pages.

While I would've loved to use it, I can't work from the documentation's example, unfortunately, because (a) it presumes that 2FAuth is the only service running on the machine, and (b) it doesn't seem to work when transposed into a location block. They do have the Custom Base URL option, but that doesn't work either: it just gives me a 403 error for the entire app (directory index of "/var/www/2fauth/public/" is forbidden, client: 192.168.1.xxx, server: the-server.local, request: "GET /2fa/ HTTP/1.1", host: "the-server.local"; and again, I emphasize that the permissions are set correctly), which makes me think that maybe nobody on the team uses NGINX.

With both NGINX and 2FAuth set for debugging output, the NGINX debug log gives me this, trimmed to the parts that look relevant:

*70 try files handler
*70 http script var: "/2fa/user"
*70 trying to use file: "user" "/var/www/2fauth/public/user"
*70 http script var: "/2fa/user"
*70 trying to use dir: "user" "/var/www/2fauth/public/user"
*70 http script copy: "/index.php?"
*70 trying to use file: "/index.php?" "/var/www/2fauth/public//index.php?"
*70 internal redirect: "/index.php?"

And the Laravel log is empty, so it's not getting that far.

Permissions and ownership of 2FAuth seem fine. No, there's no /var/www/2fauth/public/user, which seems to make sense, since that's almost certainly an API endpoint and none of the other "pages" have files by those names.

I have theories on what the application needs (probably the path as an argument of some sort), but (a) I'm not in the mood to slog through a PHP application that I don't intend to make changes to, and (b) I don't have nearly the experience with NGINX to know how to make that happen.
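For what it's worth, given that the debug log shows the internal redirect escaping to /index.php, where neither of my location blocks can catch it, my best guess at the shape of a fix is keeping the try_files fallback under the /2fa/ prefix. Something like this sketch, which is untested guesswork on my part rather than anything from the docs:

    location /2fa/ {
        alias /var/www/2fauth/public/;
        index index.php;
        # Keep the fallback under /2fa/ so the internal redirect still
        # matches the PHP location block instead of escaping to /index.php.
        try_files $uri $uri/ /2fa/index.php?$query_string;
    }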

It seems impossible that I'm the first one doing this, but it also feels like a small enough problem (particularly with a working desktop authenticator app) that it's not worth filing a GitHub issue, especially when their existing NGINX examples are so...worryingly off. So, if anybody can help, I'd appreciate it.

[-] [email protected] 27 points 6 months ago

I've been using different versions of SearX as my standard search engine for a long while (sometimes on my server, sometimes through a provider like Disroot), since I've never had great luck with the big names. It's decent, but between upstream provider quota limits and the fact that it relies on corporate search APIs at all, the quality sometimes craters.

I don't have nearly as much experience with YaCy, since I haven't had the energy to run it on my own and public instances tend not to have a long life, but when I have gotten to try it out, the search itself looked great; it just generally didn't have as broad or current an index. Long-term, though, it (and its protocol) is probably going to be the way to go, if only because a company can't randomly tank it like they can with the meta-search systems or their own interfaces.

Looking at Presearch for the first time now, the search results look almost surprisingly good, if poorly sorted, but the fact that I now know orders of magnitude more about their finances and their cryptocurrency token than about what and how the thing actually searches makes me worry a bit about its future.

[-] [email protected] 15 points 8 months ago

I believe that YouTube supports RSS. I haven't used it in years, but gPodder allowed subscribing to channels.

Ah, yeah. From this post:

  • Go to the YouTube channel page.
  • Click more for the About box.
  • Scroll down to click Share channel. Choose Copy channel ID.
  • Get the feed from https://www.youtube.com/feeds/videos.xml?channel_id= followed by that channel ID from the previous step.

From there, something (like a podcast client) needs to grab the video.
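If you'd rather sanity-check a feed from code than from a podcast client, a few lines of Python will list the recent uploads (feedparser comes from PyPI, and the channel ID here is a placeholder):

import feedparser  # pip install feedparser

# Placeholder ID; substitute the channel ID copied in the steps above.
channel_id = "UCxxxxxxxxxxxxxxxxxxxxxx"
feed = feedparser.parse(
    f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"
)
for entry in feed.entries:
    print(entry.published, entry.title, entry.link)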

Otherwise, I've been using Tartube to download to my media server, which is not great but fine, except for needing to delete the lock file when it (or the computer) crashes, and the fact that the media server hasn't the foggiest idea of how to organize the "episodes."

[-] [email protected] 4 points 1 year ago

I can't vouch for anything about it, since I've never done more than look and bookmark the page, but Vidzy at least exists and has an instance that plays one short video...

[-] [email protected] 3 points 1 year ago

Likewise, feel free to reach out if you need a hand. I don't always have time, but I do my share of weird programming.

[-] [email protected] 6 points 1 year ago* (last edited 1 year ago)

Always good to see more effort to surface these things. A couple of possible enhancements come to mind.

  • Pepper & Carrot probably belongs under comics, and/or comics belongs as a subset of fiction.
  • It'd be great to filter by license, maybe similar to what Openverse (which you already have listed) does. I know that Creative Commons doesn't see a problem with incompatible licenses, but I feel like people in the space have strong feelings about how "free/libre" it is to say that something can't be used commercially (whatever that means) or can't be altered.
  • If you want a pile of fiction of various sorts, at the risk of self-promoting, I spotlight (and ideally have discussions around) Free Culture works on Saturdays. https://john.colagioia.net/blog/tag/bookclub/ (And a bunch of the links actually lead to collections.)
  • Another pile that you'll need to figure out how to sift through on your own (I haven't had the time to work out how to parse it): Chris "Sanglorian" Sakkas posted the (I imagine) final backup of his Free and Open Works wiki, sort of your predecessor project. (Edit: I stupidly forgot the link: https://archive.org/details/freeand-open-works-20200811084450)
  • Too much manual labor, I realize, especially as the list expands, but ideally, it'd be nice to have some idea of what lives at the other end of a link beyond the format. The videos especially could plausibly be anything...

Thanks for getting this rolling!

[-] [email protected] 3 points 2 years ago

Hate to be the bearer of bad news, but I actually summarized a section of the hilariously reactionary open letter in support of Stallman.

He is usually more focused on the philosophical underpinnings, and pursuing the objective truth and linguistic purism, while underemphasising people’s feelings on matters he’s commenting on. This makes his arguments vulnerable to misunderstanding and misrepresentation…

People genuinely signed onto "objective truth" and "linguistic purism" making him "vulnerable to misunderstanding." If strawmen happen to stand among his most vocal supporters, that's not remotely my problem.

But no, "there's an AGPL that you can hunt for, and maybe someday they'll have an opinion on machine learning" isn't a counter-argument, to me. Those make my point for me, that they've never really cared about anything until it was far too late. I'm not going to tell you not to support them, but I'll thank you for not telling me that I'm wrong for using their behavior and that of their supporters to assess them.

[-] [email protected] 33 points 2 years ago* (last edited 2 years ago)

I keep saying "no" to this sort of thing, for a variety of reasons.

  1. "You can use this code for anything you want as long as you don't work in a field that I don't like" is pretty much the opposite of the spirit of the GPL.
  2. The enormous companies slurping up all content available on the Internet do not care about copyright. The GPL already forbids adapting and redistributing code without licensing under the GPL, and they're not doing that. So another clause that says "hey, if you're training an AI, leave me out" is wasted text that nobody is going to read.
  3. Making "AI" an issue instead of "big corporate abuse" means that academics and hobbyists can't legally train a language model on your code, even if they would otherwise comply with the license.
  4. The FSF has never cared about anything unless Stallman personally cared about it on his personal computer, and they've recently proven that he matters to them more than the community, so we probably shouldn't ever expect a new GPL.
  5. The GPL has so many problems (ones that the FSF either doesn't care about or isolates in random silos, like the AGPL, as if the web were still a fringe thing, all stemming from the license being based on one person's personal focuses) that AI barely seems relevant.

I mean, I get it. The language-model people are exhausting, and their disinterest in copyright law is unpleasant. But asking an organization that doesn't care to add restrictions to a license that the companies don't read isn't going to solve the problem.

[-] [email protected] 6 points 2 years ago* (last edited 2 years ago)

In addition to YaCy and the varieties of Searx (both of which perform better for me than any of the commercial search engines), it's not even out of the question to do this yourself, if you're willing to start with the most recent Common Crawl dump and do some spidering in between releases. I don't recommend it, unless you want to learn for yourself why search engines often give such miserable results, but it's possible.

However, that's the issue here. Can you self-host a search engine? Sure, if you want to maintain the storage to back it, and that depends on how deep your pockets go...
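For a sense of what starting from Common Crawl looks like in practice, the crawl index is queryable over plain HTTP. A quick Python sketch (the crawl label below is only an example; check https://index.commoncrawl.org/ for the current one):

import json
import requests

# Example crawl label, not necessarily the latest.
index = "https://index.commoncrawl.org/CC-MAIN-2023-50-index"
resp = requests.get(index, params={"url": "example.com/*", "output": "json"})
for line in resp.text.strip().splitlines():
    record = json.loads(line)  # one JSON record per captured page
    print(record["timestamp"], record["url"], record["filename"])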

[-] [email protected] 4 points 2 years ago

It's not as clean a solution as they'd like it to be, but for another option, Jellyfin hosts media including books. When I say "not as clean," I mean that you can stream video and music from the server, but it has you download books to read on another device. Last I heard, they were looking to integrate at least a PDF viewer into the interface, though.

[-] [email protected] 7 points 2 years ago

Granted, I don't run instances of anything yet, but speaking as someone who has been on the Internet for a while, including in moderation capacities...

  • Yes, obviously make mental health treatment more accessible, but if it has gotten to the point where it's needed (as opposed to the equivalent of checkups and maintenance), then things have already gotten out of hand.
  • Moderation needs to happen as a team or community, because you can't take a break if it's all on you. At that point, problems grow while you try to heal, and you come back to a worse situation than you started with.
  • While we should pay moderators for their time, because their time is valuable, that's also not a solution, just basic respect. People with high-paying jobs burn out, too.
  • Long term, though I obviously have no authority or sway in these matters, the idea of "moderation" should probably be replaced by "governance," because governance carries the connotation of distributed responsibility. The person who decides whether to discipline in a given case isn't the same person who metes out the discipline. Neither of them decide appeals on the decision, and none of them work without oversight. Also, the expansion of the Fediverse is largely a shift away from feudal governance to more-but-not-totally-democratic governance, which I think is more comprehensible to most people than "the owner of your server (who you've never really considered as a person) can't put up with your crap anymore and is pulling the plug."

That's unfortunately neither complete nor a useful policy proposal, but hopefully those off-the-cuff ideas will spur something more worthwhile.

[-] [email protected] 3 points 2 years ago

My half-solution to this has always been to record where I'm working in my notes: a file, a method name, and maybe a control structure if warranted. I've never needed to take that final step (hence "half-solution"), but that carries about enough information that someone could hack together a quick program to merge the notes and code in a reasonable way.

While (as I say) I've never specifically needed it, at work I've often wanted to do exactly that, and then take the next step of sifting through version control, the ticketing system, and team chats to pull together a complete view of what's been happening around a particular chunk of code. I point all of that out because I think you're on the right track, however you ultimately solve that problem for yourself.
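To make that concrete, a toy version of the merge might look like the following in Python, where the file:method: note format is a convention I'm inventing for the sketch rather than anything standard:

import re
import sys

# Usage: python merge_notes.py notes.txt source.py
# Notes are assumed to look like "source.py:method_name: the note text".
notes = {}
with open(sys.argv[1]) as f:
    for line in f:
        m = re.match(r"([^:]+):(\w+):\s*(.*)", line)
        if m:
            notes.setdefault((m.group(1), m.group(2)), []).append(m.group(3))

source = sys.argv[2]
with open(source) as f:
    for line in f:
        print(line, end="")
        m = re.match(r"\s*def\s+(\w+)", line)  # Python definitions only
        if m:
            for note in notes.get((source, m.group(1)), []):
                print(f"    # NOTE: {note}")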

