1
18
submitted 2 weeks ago by [email protected] to c/[email protected]
2
4
submitted 3 weeks ago* (last edited 3 weeks ago) by [email protected] to c/[email protected]

Hello! Basically, I need to process a very large (4000-line) file and free AI chatbots like ChatGPT aren't able to handle it. I would like to split it into smaller parts and process each part separately. However, I'm having a very hard time finding a chatbot with a free API. The only one I found is HuggingChat, but after a few requests (waiting 1 second before sending the next one) it starts giving rate limit errors.

Any suggestions? Thanks in advance!

EDIT: I also tried to run GPT4All on my laptop (with integrated graphics) and it took like 2-5 minutes to answer a simple "hello" prompt, so it's not really feasible :(
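For reference, a minimal sketch of the chunk-and-wait approach described above. The chunk size and delay are assumptions to tune against whatever provider is used, and send_to_model is a hypothetical placeholder for the actual API call:

    import time

    CHUNK_SIZE = 200          # lines per request; tune to the model's context limit
    DELAY_BETWEEN_CALLS = 5   # seconds; increase if the provider still rate-limits

    def send_to_model(text: str) -> str:
        """Hypothetical placeholder for whichever free API ends up being used."""
        raise NotImplementedError

    def process_file(path: str) -> list[str]:
        with open(path, encoding="utf-8") as f:
            lines = f.readlines()

        results = []
        for start in range(0, len(lines), CHUNK_SIZE):
            chunk = "".join(lines[start:start + CHUNK_SIZE])
            results.append(send_to_model(chunk))
            time.sleep(DELAY_BETWEEN_CALLS)  # fixed spacing to stay under rate limits
        return results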

3
5
submitted 1 month ago by [email protected] to c/[email protected]

Hi! I've been working with Hugo for a while and I also created a free MIT-licensed theme with it! I love the flexibility and the ease of use.

But I'll have to work on a somewhat more complicated project than a simple showcase website/blog. The content to be published on it is not a lot, but it would definitely be better if I could:

  • Get/post content via an API, to avoid posting the same articles multiple times on different platforms, and to pull in modifications as well.
  • Send post digests via email / download PDF post digests.
  • Post on social media (?)
  • Parse some content from CSV files (see the sketch below) / I don't know anything about databases.

Now I know that I could do something like this with a little systemd service I might write on my own, plus something like Zapier + an RSS feed + Mailchimp. I could also leverage Hugo modules and resources.GetRemote / transform.Unmarshal to get content from remote sources.
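For the CSV point above, a minimal sketch of one way to turn a CSV file into Hugo content files at build time. The title/date/body columns and the content/posts output path are assumptions; the real column names and front matter would need to match the actual data:

    import csv
    from pathlib import Path

    # Assumed CSV columns: title, date, body. Adjust to the real file layout.
    def csv_to_hugo_content(csv_path: str, out_dir: str = "content/posts") -> None:
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        with open(csv_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                slug = row["title"].lower().replace(" ", "-")
                front_matter = (
                    "---\n"
                    f'title: "{row["title"]}"\n'
                    f"date: {row['date']}\n"
                    "---\n\n"
                )
                # One markdown file per CSV row, ready for Hugo to build.
                (out / f"{slug}.md").write_text(front_matter + row["body"], encoding="utf-8")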

Now, I'm not really much more than an amateur developer, and I was thinking a headless CMS could perhaps do this stuff, and more, in a better way (?). I'm not a web dev and I know only really, really basic JavaScript; I can use Bootstrap for the frontend confidently and add SCSS to it. I know a bit of Rust too.

Would it be worth taking the time to learn how headless CMSes work? I don't really want to go back to managing WordPress plugins, updates, etc.

Do you think I'm going beyond the purpose of a static site generator with this kind of project?

4
22
submitted 1 month ago by [email protected] to c/[email protected]
5
3
submitted 2 months ago by [email protected] to c/[email protected]

I wrote this implementation many years ago, but I feel it didn't receive the recognition it deserved, especially since it was the first one freely available. So, better late than never, I'd like to present it here. It's an algorithm for computing the Weighted Voronoi Diagram, which extends the classic Voronoi diagram by assigning different influence weights to sites. This helps solve problems in computational geometry, geospatial analysis, and clustering, where sites have varying importance. While my implementation isn't the most robust, I believe it could still be useful or serve as a starting point for improvements. What do you think?
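For readers unfamiliar with the concept, a brute-force sketch of what a weighted Voronoi assignment computes, assuming the multiplicatively weighted variant; the linked implementation may use a different weighting and a far more efficient algorithm than this grid rasterization:

    import math

    # Sites as (x, y, weight); larger weight = larger region of influence,
    # because the Euclidean distance is divided by the weight.
    def nearest_site(px, py, sites):
        return min(
            range(len(sites)),
            key=lambda i: math.hypot(px - sites[i][0], py - sites[i][1]) / sites[i][2],
        )

    def rasterize(sites, width, height):
        """Label every grid cell with the index of its weighted-nearest site."""
        return [[nearest_site(x, y, sites) for x in range(width)] for y in range(height)]

    # Example: two sites, the second twice as influential.
    grid = rasterize([(10, 10, 1.0), (40, 30, 2.0)], width=60, height=40)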

6
4
submitted 2 months ago by [email protected] to c/[email protected]

AWS Certified Cloud Practitioner - Free Practice Quiz

Hey 👋,

Recently I obtained the certificate and I thought it would be great to have a free quiz!

7
11
submitted 2 months ago by [email protected] to c/[email protected]
8
13
submitted 2 months ago by [email protected] to c/[email protected]
9
21
submitted 2 months ago by [email protected] to c/[email protected]

Let's say I have to host 25 websites. How do I know how powerful my VPS should be? Which specs should it have, and how fast should the connection be to handle X visits per day?

How do you figure out your system requirements BEFORE deploying a project? Do you just make estimates and then scale up? Or is there some kind of tool to benchmark? How do you handle this kind of stuff?
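As one rough answer to the "tool to benchmark" part, a sketch that hammers a staging copy of a site with concurrent requests and reports latency. The URL and request counts are placeholders, and dedicated tools like ApacheBench or wrk do this job better:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical staging URL and load level; point this at a test deployment, never production.
    URL = "https://staging.example.com/"
    REQUESTS = 200
    CONCURRENCY = 20

    def fetch(_):
        start = time.time()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        return time.time() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = list(pool.map(fetch, range(REQUESTS)))

    print(f"avg {sum(latencies) / len(latencies):.3f}s, worst {max(latencies):.3f}s")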

10
43
submitted 3 months ago by [email protected] to c/[email protected]
11
7
submitted 3 months ago by [email protected] to c/[email protected]

cross-posted from: https://lemmy.ml/post/25903184

I wrote a CLI tool that generates basic scaffolding for all sorts of coding projects, from Zig applications to NPM packages.

Feel free to ask questions or contribute!

12
11
submitted 3 months ago by [email protected] to c/[email protected]

I am a jr. dev, but not an imaginative one. I find most programming languages to be poorly advertised and explained. Usually they assume too much knowledge.

What would I use WebAssembly, like Goblin (https://spritely.institute/), for? Are there examples of WebAssembly programs?

Is it for web pages, or online games? Can it be run offline?

13
11
submitted 3 months ago* (last edited 3 months ago) by [email protected] to c/[email protected]

Hi all, I am one year into my coding journey and could use a little guidance on a project. I have created a Python program that provides real-time hockey stats and game information, using API calls (documentation here: https://github.com/Zmalski/NHL-API-Reference). The code is working, and it's really fun to see stats updating in real time as I'm watching my favorite sport. Just one problem: this is all happening in the terminal window 😆 What Python library would you recommend for creating a pleasing visual display for dynamically-generated data? I thought it might be Pygame, which I have some experience with, but now I'm not sure. Right now I'm only presenting text and numbers, not drawing any plots, but should I consider matplotlib? Thanks for any advice!

Edit: I think the term for what I'm trying to make is a "dashboard"? Is that correct, and does that help answer the question? Thanks y'all!
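One lightweight option for such a dashboard is Tkinter from the standard library; a minimal sketch of a self-refreshing text display, where fetch_score_text is a stand-in for the poster's existing API code:

    import tkinter as tk

    REFRESH_MS = 30_000  # poll every 30 seconds

    def fetch_score_text() -> str:
        """Placeholder: return whatever string the existing NHL API code already builds."""
        return "TOR 2 - 1 MTL (2nd period)"

    root = tk.Tk()
    root.title("NHL Live Stats")
    label = tk.Label(root, font=("Helvetica", 24), padx=20, pady=20)
    label.pack()

    def refresh():
        label.config(text=fetch_score_text())
        root.after(REFRESH_MS, refresh)  # schedule the next update

    refresh()
    root.mainloop()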

14
5
submitted 3 months ago by [email protected] to c/[email protected]

Hi! I'm trying to achieve this configuration: essentially, all the traffic in the network should pass through the content filtering in the proxy (assume I have control over the clients). All non-proxied traffic should be blocked by default.

I know not all network traffic can pass through a proxy, but I'm not sure I understand how all of this actually works.

My UFW firewall configuration is the following:

To                         Action      From
--                         ------      ----
3128                       ALLOW OUT   Anywhere                  
53                         ALLOW OUT   Anywhere        

Port 53 is for DNS requests (which cannot pass through the proxy); even if I use DoH, this port needs to be open for bootstrapping.

Port 3128 is the Squid proxy port.

I'm assuming the following:

client -> DNS request (53, cannot be handled by the proxy) -> DNS response
client -> proxy (all ports the proxy can handle) -> HTTP/HTTPS/FTP response
client -> blocked (all other ports)

But from the UFW logs it looks like the client is trying to make requests (e.g. HTTPS requests) directly through port 443, instead of passing through 3128. Maybe I'm getting something wrong here about how proxies work.
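For context on that last point: with an explicit (non-transparent) proxy like Squid, each client application has to be told to send its traffic to port 3128, otherwise it dials port 443 directly, which matches what the UFW logs show. A minimal sketch of a client request routed through the proxy, where the 127.0.0.1 address is a placeholder for the actual Squid host:

    import urllib.request

    # Hypothetical Squid address; with an explicit proxy the client must be configured
    # to use it, otherwise it opens connections straight to port 443 (which UFW blocks/logs).
    proxy = urllib.request.ProxyHandler({
        "http": "http://127.0.0.1:3128",
        "https": "http://127.0.0.1:3128",
    })
    opener = urllib.request.build_opener(proxy)

    with opener.open("https://example.com", timeout=10) as resp:
        print(resp.status)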

Do you have any suggestions?

15
15
submitted 4 months ago by [email protected] to c/[email protected]

After finding myself once again completely sucked in by all the internet bullshit, and having huge difficulty doing any task that involves using a computer, I think I am finally ready to address the need for better moderation on the internet.

Not for platforms, but for people.

So many people have this kind of problem with the internet: there is too much dispersion, it is hard to use it while staying focused on a single task, and it's hard not to get hooked by one of the countless activities you can find online. Weren't PCs conceived to be "bicycles for the mind"? The internet was supposed to be a place where we share things with each other. It's a tool, but a tool needs to be at the service of whoever is using it, and so many people nowadays have no fucking idea what they are doing. And even among tech-savvy people (I include myself here too), how many of us do you think could decide to stop using all the bad sides of the internet (because we know they're bad, I mean we're on Lemmy) just with willpower?

People should have the right to use the internet as they please, but since we have so many tools for mindlessly finding and consuming everything on the web, and since they are pushed towards us so strongly, I think it's right to give more options to "opt out" as well, for anyone who desires it.

I think we lack tools for moderation, for making a clear distinction between "I wanna consume this kind of content" and "I wanna stay the hell away from all this bullshit".

It's cool that there are alternatives: Lemmy really is a safer space compared to the hell that Reddit is now, but I think we should go deeper than that and get better at opting out of what we don't like as well. And while the focus should still be "build a better internet/software/hardware", I think we need to address how difficult it is to take distance from certain aspects of the currently unavoidable and somewhat desperate situation of the web.

Linux has a clear solution for managing one's activity on a computer: users, groups and ownership. The admin has complete freedom to shape the environment for the users. And it's very effective: once a directory belongs to another user, you cannot read it without that user's consent (or the admin's). There's not much room for bypassing this.

But what do we have for the web?

We can use browser extensions, proxies, DNS filtering, VPNs, blacklists, etc. But bypass methods for most of this kind of filtering are one web search away.

I've come to the point where I think only strict whitelisting, with just the few sites you want and every other connection blocked, could work. But it still looks incredibly hard with so many sites using CDNs now, and so many new domains registered.

We should be able to have something like users for accessing web content too: Jane is allowed to visit streaming sites, Bob wants to focus on studying and can only visit cultural content. But I think categorizing and moderating ever-changing content, which is not as "static" as on an OS, needs ever-changing moderating tools as well.

I think AI could have an amazing role in this: it can scrape content before we visit it and decide whether it is suitable for our purpose. And since content always changes, the AI will always be changing as well. I am also thinking that something like an "AI in the middle" for HTTPS requests could have its role, even if privacy concerns would need to be addressed. I like the concept of something like OpenAI's omni-moderation API, for example.

Projects like Tor focus on breaking censorship and that is amazing, but I don't see why a user shouldn't be able to reliably censor some content for themselves if they want to (at the client level).

I don't know, but I feel like we need better control over how we use the web. If anyone wants to have a talk/brainstorm, I can make a Matrix room.

16
11
submitted 4 months ago by [email protected] to c/[email protected]
17
35
submitted 4 months ago by [email protected] to c/[email protected]

Not directly related to programming, but I was thinking it might be too technical for a technology post.

18
8
submitted 4 months ago by [email protected] to c/[email protected]

Do you want to support our community? If any of the content on our platform has given something to you, help us grow by participating in it and interacting with other programmers. Create your account today. https://chat-to.dev/ #programming #programmers #hacker #code

19
17
submitted 4 months ago by [email protected] to c/[email protected]

What are some programming languages that reuse most of the syntax of another (rather than just a few elements of it)?

20
38
submitted 4 months ago* (last edited 4 months ago) by [email protected] to c/[email protected]
21
8
submitted 4 months ago by [email protected] to c/[email protected]
22
14
submitted 5 months ago by [email protected] to c/[email protected]

The intent is to remember, years later, what specific tasks I had, even if I've left the company and no longer have access to my files. It's been very useful during interviews when asked about the details of what I did, and in conversations with friends who want to know what my day-to-day is like. I've learned that this journal has to be kept in a personal space so that I won't lose access to it during layoffs, for example. Do you have any similar habits? What are your policies?

23
19
submitted 5 months ago* (last edited 5 months ago) by [email protected] to c/[email protected]

Refactoring gets really bad reviews, but from where I'm sitting as a hobby programmer, in relative ignorance, it seems like it should be easier than writing from scratch, because you could potentially reuse a lot of code. Can someone break it down for me?

I'm thinking of a situation where the code is ugly but still legible here. I completely understand that actual reverse engineering is harder than coding on a blank slate.

24
11
submitted 5 months ago by [email protected] to c/[email protected]

Know thyself

25
9
submitted 5 months ago by [email protected] to c/[email protected]

Programming


All things programming and coding related. Subcommunity of Technology.


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago