[-] brokenlcd@feddit.it 4 points 2 days ago

Liking computers in general and switching to Linux at 15 out of desperation.

After that, all it took was getting a shitbox PC as a hand-me-down to make me go "Linux is also used on servers, right? Shouldn't be too difficult to set something up." And that's how I got the bug.

[-] brokenlcd@feddit.it 6 points 2 days ago

I once wired my whole-ass house for Ethernet, instead of studying. (Before realizing I was colorblind, no less.)

Never underestimate how you can use study procrastination as a driving force for other shit. (Unless you're a dipshit like me and do it with an exam imminent.)

[-] brokenlcd@feddit.it 2 points 2 days ago

Just coming off a two-week exam crunch. This hits way closer to home than I'm comfortable admitting.

[-] brokenlcd@feddit.it 6 points 4 days ago

A summer night, yes... But a night nonetheless.

61
submitted 4 months ago* (last edited 4 months ago) by brokenlcd@feddit.it to c/asklemmy@lemmy.world

I've recently managed to set up a modded Skyrim install I'd wanted to set up and play for months. And now... zero. The same thing happened with Bloodlines a while back.

During the day the spark of wanting to play comes, but as soon as I get home it just disappears, and I end up doing other things. It feels like wanting to play the game is more appealing than actually playing it.

How do y'all manage it?

18
submitted 4 months ago* (last edited 4 months ago) by brokenlcd@feddit.it to c/selfhosted@lemmy.world

I'm trying to set up nginx as a reverse proxy to aggregate multiple services running on different ports on the server, letting me reach each service through its own subdirectory, so I can keep only one port open in the router between my lab and the main house network.

I'm using the following config file, adapted from an example I found, with a landing page that lets me get to the other services:

Used config file:


    server {
        listen 80;
        server_name 10.0.0.114;  # Replace with your domain or IP

        # Redirect HTTP to HTTPS (the port must be included, since HTTPS is on 1403)
        return 301 https://$host:1403$request_uri;
    }

    server {
        listen 1403 ssl;  # Listen on port 1403 for HTTPS
        server_name 10.0.0.114;  # Replace with your domain or IP

        ssl_certificate /certs/cert.pem;  # Path to your SSL certificate
        ssl_certificate_key /certs/key.pem;  # Path to your SSL certificate key

        location / {
            root /var/www/html;  # Path to the directory containing your HTML file
            index index.html;  # Default file to serve
        }

        # Configuration for Transmission
        location /transbt {
            proxy_pass http://10.89.0.3:9091/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

But the problem I'm having is that, while nginx redirects to Transmission's login prompt just fine, after logging in it tries to send me to 10.0.0.114:1403/transmission/web instead of staying at 10.0.0.114:1403/transbt, and the page breaks. I've found a configuration file that should work, but it manually maps each subdirectory Transmission tries to use, and it adds proxy_pass_header X-Transmission-Session-Id;, which I'm not sure what it accomplishes: github gist

Is there a way to do this without declaring every subdirectory explicitly? Especially since I need to set up other services, and I doubt I'll find ready-made config files for those as well. It's my first time setting up nginx, and I haven't been able to find anything that makes it work.
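
For what it's worth, the usual fix for Transmission specifically is to proxy it under its own hard-coded /transmission prefix instead of a custom one, so the absolute redirects it emits (like /transmission/web/) still land on nginx. This is a sketch, untested here; as far as I understand, X-Transmission-Session-Id is Transmission's anti-CSRF session header, and proxy_pass_header just makes sure nginx passes it along in the proxied responses:

    # Sketch (untested): serve Transmission under its own hard-coded prefix,
    # so the absolute redirects it emits (/transmission/web/) still hit nginx.
    location /transmission {
        proxy_pass http://10.89.0.3:9091/transmission;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass_header X-Transmission-Session-Id;
    }

The generic /transbt-style prefix in the config above works for services that use relative links or let you configure a base URL; it only breaks for ones that, like Transmission, answer with absolute paths.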

Edit: I forgot to mention. The server is still behind NAT; it's not reachable from the outside. The SSL certificate is self-signed, and it's just peace of mind, because a lot of things connect to the home net, and none of the services I plan to use are HTTP-only.

23

I managed to build my first proper PC in the most hacky way, and it works wonderfully (the one from the previous post).

The only problem I have left now is that during the heaviest workload I need it to run, the card reaches 77°C, and I'm not sure whether cycling the card between 77°C and 51°C while it writes to the HDD is dangerous, due to thermal stress.

The problem isn't the airflow of the case, but the fact that the PC sits in an under-desk shelf: the heat is pushed backwards and out by the GPU and PSU fans, but the hot air still rises toward the top, where the card takes in its air.

I'm already looking at putting fans in the cubby under the desk, but I'm also looking at undervolting the GPU so it puts out less heat, since from what I understand the performance loss is minimal up to a certain point.

The problem with that is that Nvidia doesn't expose the core voltage in the Linux drivers (...Torvalds was right on this front). I found there's a workaround for that with LACT, but I'm afraid it's going to mess with the card's warranty, or the card itself. What do you think?
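
(Side note on that worry: the stock Linux driver does expose the board power limit, which isn't a true undervolt but cuts heat output in a similar way, and it goes through Nvidia's own NVML interface, the same one nvidia-smi uses, so it shouldn't carry the same warranty anxiety as driver workarounds; that's my reading, not a guarantee. A minimal sketch with the nvidia-ml-py bindings; the 140 W figure is just an example, and setting the limit needs root:)

    # Sketch: cap the GPU's board power via NVML instead of undervolting.
    # Uses the official nvidia-ml-py ("pynvml") bindings; run as root to set.
    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

    temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
    lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)  # mW
    print(f"now {temp} C, limit range {lo // 1000}-{hi // 1000} W")

    # Example: cap at 140 W (value is in milliwatts, must stay within range).
    pynvml.nvmlDeviceSetPowerManagementLimit(gpu, 140_000)

    pynvml.nvmlShutdown()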


12

After finally having some free time between exams and work, and enough money to build it, I decided to assemble a decent PC, both for inference and general usage. Due to a limited budget I chose to pick up a refurbished ThinkCentre M700 and a 12GB 3060. The problem? The ThinkCentre is an SFF PC, so it would never have fit the card; plus, with its proprietary PSU, I couldn't upgrade to something that could run the card.

So began the quest to see how I could shoehorn a card + PSU into this mess.

The first thing that arrived was the ThinkCentre, so I got to work finding a way to turn on both the PC's and the GPU's PSUs at the same time. I needed a power source that came on as soon as the PC turned on, to drive a relay and thus switch on the GPU PSU.

Luckily the PC had two SATA power connectors, one of which I opted to use for an SSD, so the 12V line was free. It was a bit annoying since it used a CPU Molex, but the box of scrap parts took care of that:

I ended up putting the relay on the 12V line to turn on the other supply,

and the original connector that was in the PC on the 5V line to power the SSD.

Then it came time to fit the harness inside the PC. I managed to snake it in... even if I had to mess with zip ties, since I had spliced the SSD wire the wrong way around. But in the end, the PC side came out pretty well:

Fast forward a couple of weeks (courtesy of the postal system shipping my package to the other side of the country by mistake), and I had the card, the PSU and the riser.

Since I wasn't able to find a riser that turned 90° to the right, I had to place the GPU above the PSU and make a bracket to hold it up, since the riser cable was stiff as a rock. Plus, I figured that once it was all buttoned up, the PSU fan would pull air through the GPU as well, helping it somewhat.

After mocking it up with books, it didn't look too bad, so I went ahead with it.

Now I had to make the bracket, cut the holes in the top cover to let both the riser and the switched line out of the case, and figure out how to hold and protect this whole mess... so off to the workshop I work at we go.

Luckily they let me in on Sundays so I could use all the tools we have in there. (The joys of working as a small artisan :-D)

I have to admit, having a card worth so much in the midst of aluminium shavings felt wrong in a way I can't explain; in a laptop-next-to-a-pool way.

First things first, the holes in the case: I roughly marked where they were supposed to go and added leeway to let the panel slide open. The riser hole was done with an angle grinder, while the switched-line hole was done with a christmas tree drill bit, out to 12mm:

Now I had to find something to cover the sharp edges of the cut, both to avoid destroying the riser cable and my fingers. Luckily we had just bought new band saw blades, and the blade protectors fit this job perfectly:

Now to the PSU and the bracket for the GPU. My idea was to add two plates to anchor the GPU to the PSU, using the card's PCI mount to bolt it on, and then add some brackets so the PSU could screw in where the case screws went, locking it all in place:

It's ugly as sin, but it was going to be covered up in the end, so it didn't matter.

The card was locked in place with a nut and bolt through the hole where the card-retention screw would go, and a bolt/washer/wing-nut set holding the other side, between the two slot "teeth" the card has.

Now I just needed something to hold up the back of the card, since supporting it only by the faceplate felt like an extremely dumb idea.

An L extrusion with some of the blade protector on top did the job; I was even able to use the PSU's fan screws to lock it in place:

Now it was mechanically sturdy; it just lacked a shell to cover it up. Among the scraps I found a sheet of something that would work. I only know it by brand name, but it's essentially a foam panel sandwiched between two aluminium plates; if you cut only one face, you can bend it, and it looks pretty good. So I went with it.

I added L brackets to the PC panel with rivets to hold it steady, and made some holes in the panel to let the card exhaust out of both the front and the back.

(Frankly, if it weren't for the PSU cables I would have made it out of plexiglass, since seeing the card suspended like this is beautiful.)

Now it was just a matter of bringing it out of the workshop and buttoning it all up:

And that's it. I'm surprised it only took around a week to build it all, excluding the exodus the GPU had to make to reach me.

After running it for the first time with my usual model (a Nemo 12B) I have to say... holy shit if there isn't a difference between 3 tok/s on the Deck and 30 tok/s here. I was expecting an increase, but not a 10x one. Right now I'm converting some 24B models to EXL3 3bpw to finally see how they fare.

The only problem I have left now is that during the conversion (the heaviest workload I've managed to throw at it) the card reaches 77°C, and I'm not sure whether cycling the card between 77°C and 51°C while it writes to the HDD is dangerous, due to thermal stress.

The problem isn't the airflow of the case, but the fact that the PC sits in an under-desk shelf: the heat is pushed backwards and out by the GPU and PSU fans, but the hot air still rises toward the top, where the card takes in its air.

I'm already looking at putting fans in the cubby under the desk, but I'm also looking at undervolting the GPU so it puts out less heat, since from what I understand the performance loss is minimal up to a certain point.

The problem with that is that Nvidia doesn't expose the core voltage in the Linux drivers (...Torvalds was right on this front). I found there's a workaround for that with LACT, but I'm afraid it's going to mess with the card's warranty, or the card itself. What do you think? (I'm going to post the question separately as well, so people don't have to go through a bible's worth of build montage.)

I want to thank all the peeps in the !localllama@sh.itjust.works and !pcmasterrace@lemmy.world communities for helping me understand the technicalities of this whole mess, since I've never had hardware this powerful at hand.

Especially @Smokeydope@lemmy.world and @brucethemoose@lemmy.world from the localllama community, for helping me figure out whether it was even worthwhile to do this and for giving me clues on setting up an environment to run it all.

And @fuckwit_mcbumcrumble@lemmy.dbzer0.com from the pcmasterrace community, for helping me figure out airflow issues.

10
submitted 6 months ago* (last edited 6 months ago) by brokenlcd@feddit.it to c/pcmasterrace@lemmy.world

I'm hacking together a gaming/fluid-simulation PC from a Lenovo ThinkCentre M700 SFF. I've already got everything set up. The problem is that the only riser I managed to find has a 90° bend to the left (looking from where the bracket would be), so the only way to make it fit is to turn the 3060 fans up, with the extra PSU powering it underneath and ~7 cm of clearance. (I'll make some stands to hold it properly.) What I'm not sure about is whether pushing hot air downwards is going to make a big difference. The power supply fan runs continuously and draws air downwards and out of the back, but I don't know if it will be enough.

What do you think? Is it going to run much hotter like this?

9
submitted 7 months ago* (last edited 7 months ago) by brokenlcd@feddit.it to c/localllama@sh.itjust.works

I have an unused Dell OptiPlex 7010 I wanted to use as the base for an inference rig.

My idea was to get a 3060, a PCI riser and a 500W power supply just for the GPU. Mechanically speaking, I had the idea of making a backpack of sorts on the side panel, to fit both the GPU and the extra power supply, since unfortunately it's an SFF machine.

What's making me wary of going through with it is the specs of the 7010 itself: it's a DDR3 system with a 3rd-gen i7-3770. I have the feeling that as soon as it ends up offloading some of the model into system RAM, it's going to slow to a crawl. (Using koboldcpp, if that matters.)
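
For a rough sanity check, under big assumptions (token generation is roughly memory-bandwidth-bound, the numbers are nominal spec-sheet figures, and the model size is a made-up example):

    # Back-of-the-envelope: each generated token reads every weight once,
    # so tok/s is bounded by memory bandwidth / model size in bytes.
    model_gb = 8.0      # assumed: a mid-size model at ~4-bit quantization
    ddr3_gbps = 25.6    # dual-channel DDR3-1600, the i7-3770's ceiling
    gddr6_gbps = 360.0  # RTX 3060 12GB spec bandwidth

    print(f"fully in VRAM: <= {gddr6_gbps / model_gb:.0f} tok/s")
    print(f"fully in DDR3: <= {ddr3_gbps / model_gb:.0f} tok/s")

Even a partial offload drags the whole run toward the DDR3 number, since every token still has to touch the offloaded layers.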

Do you think it's even worth going through with?

Edit: I may have found a ThinkCentre that uses DDR4, which I can buy if I manage to sell the 7010. Though I still don't know if it will be good enough.

15
submitted 11 months ago by brokenlcd@feddit.it to c/asklemmy@lemmy.ml

I am making a public repo for the first time. It's not much, just a script to set up a Podman container, but I am afraid of people messing up their devices and blaming me 😅. Do you think I should put a disclaimer on the main page of the repo? If yes, what kind of disclaimer should I use? I've never really dived into the legal side of things.

13

I was experimenting with oobabooga, trying to run this model, but due to its size it wasn't going to fit in RAM, so I tried to quantize it using llama.cpp. That worked, but due to the GGUF format it was only running on the CPU. Searching for ways to quantize the model while keeping it in safetensors returned nothing; is there any way to do that?

I'm sorry if this is a stupid question; I still know almost nothing about this field.
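
One route that seems to fit (a sketch, not something verified end to end; "some-model-id" is a placeholder for whatever model the post refers to): the transformers + bitsandbytes stack quantizes on the fly at load time, so the checkpoint stays plain safetensors on disk while running quantized on the GPU.

    # Minimal sketch: load a safetensors checkpoint with on-the-fly 4-bit
    # quantization via bitsandbytes (the files on disk stay safetensors).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                     # quantize weights to 4-bit at load
        bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
        bnb_4bit_compute_dtype=torch.float16,  # do the math in fp16
    )

    model_id = "some-model-id"  # placeholder
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb_config,
        device_map="auto",                     # place layers on the GPU
    )

    inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))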

7

I've been trying to install ROCm on the Deck following this guide:

(https://old.reddit.com/r/SteamDeckTricks/comments/102xxww/guide_how_to_install_rocm_for_gpu_julia/)

with the only exception that I used a generic Ubuntu container on Podman instead of Distrobox. I reach the point where I need to use amdgpu-install to install ROCm, but when I do, it tries to install 30GB of files, which unfortunately isn't feasible on my metered connection and storage (╥﹏╥). From what I saw in this issue on GitHub, there should be a way to select only the binaries needed for the specific chipset.

The question is: what should I do to install only the stuff necessary for the Deck?

Sorry for the long post.

[-] brokenlcd@feddit.it 226 points 1 year ago* (last edited 1 year ago)

When an artificer and a rogue are faced with the same problem.

Edit: my english sucks

[-] brokenlcd@feddit.it 104 points 2 years ago

Joke's on you, I'm into that shit.

[-] brokenlcd@feddit.it 284 points 2 years ago

It seems like a flavour of the rubber duck method; by trying to explain it to a third party, you think about it in a different way and find a solution.

[-] brokenlcd@feddit.it 129 points 2 years ago

I mean, if we want to get pedantic, nothing is stopping a virus from bringing its own drivers, or a whole-ass Windows VM, to pass the USB through (I remember there was something of the sort for Windows, using a Windows XP VM for a botnet). As always, it's just a matter of how willing you are.

[-] brokenlcd@feddit.it 129 points 2 years ago* (last edited 2 years ago)

He threatened you: either buy a new book, or he would make your uni career hell. One of my mates did it anyway; at the final exam the prof sent him back five times. The last time he went to take the exam, the coordinator said, "What else have you got to ask him? He's told you everything in your course. [Insert name], give me the paper," signed the paper, and sent him off. The prof still gave him only 60/100.

I still want to slap that piece of shit.

After that, I taught other people at the uni to do it. He tried to mitigate it by writing over the printed title of the book, hoping any tampering would be evident; the toluene didn't touch the toner, so it didn't work.

Edit: grammar mistake (thanks mac)

[-] brokenlcd@feddit.it 220 points 2 years ago* (last edited 2 years ago)

I have had uni professors sign books to make sure people actually bought new copies and not used ones (they wrote the books); unfortunately for them, I had access to toluene to get pen ink off. I did the same for all of my peers. Fuck those kinds of professors.

23

I remember someone on the Steam Deck community mentioning a patch that reduced the size of the assets and, consequently, the size of the whole game. Does it actually exist, or did I just hallucinate it?

[-] brokenlcd@feddit.it 116 points 2 years ago

To be honest, I'd prefer for it to be there at all, considering the current trend of removing it.

But to answer your question: jack on top, so when the phone is plugged into the wall you can stay on the other side, since the headphone cable won't be bent.

38
submitted 2 years ago by brokenlcd@feddit.it to c/android@lemmy.world

When Android Auto first came out, I remember Android sticks appearing on the market that were meant to expand its functionality. They were like the Android Auto wireless adapters that are around nowadays, but were standalone Android devices that used a phone's hotspot for an internet connection. Are these sticks still around, or are they completely gone?

[-] brokenlcd@feddit.it 123 points 2 years ago

I think your best option is to find a used one that supports Valetudo.

