this post was submitted on 17 Jul 2023
215 points (96.9% liked)

Actually Useful AI


Wanted to share a resource I stumbled on that I can't wait to try and integrate into my projects.

A GPT4All model is a 3GB - 8GB file that you can download and plug into the GPT4All open-source ecosystem software. Nomic AI supports and maintains this software ecosystem to enforce quality and security alongside spearheading the effort to allow any person or enterprise to easily train and deploy their own on-edge large language models.
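For the curious, here is a minimal sketch of driving a GPT4All model from Python, assuming the official `gpt4all` bindings are installed (`pip install gpt4all`). The model filename is illustrative, not prescriptive; the first call downloads the weights (a few GB) into a local cache.

```python
# Minimal sketch, assuming the `gpt4all` Python bindings are installed.
# The model filename is illustrative; swap in any model from the catalog.
try:
    from gpt4all import GPT4All
except ImportError:
    GPT4All = None  # bindings not installed

def ask(prompt: str, model_file: str = "orca-mini-3b-gguf2-q4_0.gguf") -> str:
    """Load a local model (cached after the first run) and answer one prompt."""
    if GPT4All is None:
        return "gpt4all bindings not installed"
    model = GPT4All(model_file)         # downloads on first use, then loads from cache
    with model.chat_session():          # keeps multi-turn context across generate() calls
        return model.generate(prompt, max_tokens=128)
```

Everything runs locally; nothing leaves your machine.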

top 22 comments
[–] [email protected] 28 points 1 year ago* (last edited 1 year ago)

I'm still waiting for a local autonomous AI agent with search. I don't understand why most autonomous agent projects use GPT-4 without incorporating search capabilities. Allowing the model to continuously hallucinate is not productive. Instead, it should be able to discover factual information and perform genuinely useful tasks.
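The kind of search-augmented agent described here boils down to a small tool-use loop: ask the model, detect a tool request, fetch results, and re-ask with the results attached. A minimal sketch, where `local_llm` and `web_search` are stand-in stubs rather than real APIs:

```python
# Hypothetical tool-use loop; `web_search` and `local_llm` are stubs,
# not real APIs, standing in for a search backend and a local model.
def web_search(query: str) -> str:
    """Stub: a real agent would call a search API here."""
    return f"[search results for: {query}]"

def local_llm(prompt: str) -> str:
    """Stub: a real agent would run a local model here."""
    if "[search results" not in prompt:
        return "SEARCH: " + prompt  # the model decides it needs facts
    return "ANSWER grounded in " + prompt.split("[", 1)[1].rstrip("]")

def agent(question: str) -> str:
    """One search-augmented step: ask, detect a tool request, re-ask with results."""
    reply = local_llm(question)
    if reply.startswith("SEARCH: "):
        results = web_search(reply[len("SEARCH: "):])
        reply = local_llm(f"{question}\n{results}")
    return reply
```

The point of the loop is exactly what the comment asks for: the model grounds its answer in fetched results instead of generating from memory alone.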

[–] [email protected] 28 points 1 year ago (2 children)

I've got this running. And it's fun!

But it's also bad compared to chatgpt, or even bing.

[–] [email protected] 4 points 1 year ago (2 children)

Is there a free and better ChatGPT alternative out there right now? I've gone through multiple and none are as good.

[–] [email protected] 23 points 1 year ago (2 children)

I believe Claude 2 is the best LLM option currently, if you live in the US or UK or have a VPN.

[–] [email protected] 1 points 1 year ago

Claude 2 isn't free though, is it?

Either way, it does depend on what you want to use it for. Claude 2 is very biased towards positivity, and it can be like pulling teeth if you're asking it to generate anything it even remotely disapproves of. In that sense, Claude 1 is the superior option.

[–] [email protected] 1 points 1 year ago

I'm using it from Canada without a VPN, just using the email login method (with an iCloud account if it matters).

[–] [email protected] 1 points 1 year ago

Not that I know of.

[–] [email protected] 1 points 1 year ago (1 children)

Yeah, if you use OpenAI's API key, it's cheaper and a bit more private than their website. I think it's like $0.10/day for ~100 queries.

[–] [email protected] 6 points 1 year ago (2 children)

I plugged GPT-4 into my discord bot.

It's $0.03 per 1k tokens. That translates to about 3 or 4 messages.

gpt-3.5-turbo is almost as good and way cheaper at $0.0015 per 1k tokens.

[–] [email protected] 1 points 1 year ago

Three cents for every 1k prompt tokens. You pay another six cents per 1k generated tokens in addition to that.

At 8k context size, this adds up quickly. Depending on what you send, you can easily be out ~thirty cents per generation.
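The arithmetic behind that estimate can be checked in a few lines, using the rates quoted above ($0.03 per 1k prompt tokens, $0.06 per 1k generated tokens):

```python
# Cost of one GPT-4 (8k) request at the per-1k-token rates quoted above.
def gpt4_cost(prompt_tokens: int, completion_tokens: int,
              prompt_rate: float = 0.03, completion_rate: float = 0.06) -> float:
    """Return the USD cost of a single request."""
    return (prompt_tokens / 1000 * prompt_rate
            + completion_tokens / 1000 * completion_rate)

# Filling most of the 8k window, e.g. 6k prompt + 2k completion:
print(round(gpt4_cost(6000, 2000), 2))  # 0.3
```

So a single near-full-context generation really does land around thirty cents.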

[–] [email protected] 1 points 1 year ago

Depends on how you use context. If you don't need it to have memory, it's much, much cheaper

[–] [email protected] 10 points 1 year ago

I'm looking forward to FOSS AI solutions having their breakthrough, but for now they can't compete with proprietary software. Except maybe Stable Diffusion.

[–] [email protected] 7 points 1 year ago

Worth noting that if you want a local LLM on Android, MLC Chat can run Vicuna-7B, RedPajama, and several other models from Hugging Face on fairly average hardware. The interface is still basic, but it's functional.

[–] [email protected] 5 points 1 year ago (2 children)

How do these compare to GPT-3 and GPT-4?

[–] [email protected] 9 points 1 year ago

It's not very good. But that's the tradeoff for having full control of a local LLM right now.

[–] [email protected] 7 points 1 year ago

In my experience GPT4All is both slower and less accurate, but YMMV.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago)

I'll probably stick to ~~automatic1111~~ oobabooga (mixed up my tools) for now, seeing as they both seem to run the same models. Certainly neat to see a more general, user-friendly app though.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

Been playing around with it for a couple of weeks, and it's fun! Its local server option made it really easy to use with LangChain, and Orca Mini is amazingly fast (though it seems to need proper prompts, which I'm still working out :D ). Oh, and it even lets you see the server-side chat, which is really useful when you chain prompts with LangChain.
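For anyone wanting to try the local server route: it speaks an OpenAI-style chat API, and port 4891 is the usual GPT4All default (check your own settings). A hedged, standard-library-only sketch; the model filename is illustrative:

```python
# Hedged sketch: GPT4All's optional local server exposes an
# OpenAI-compatible chat endpoint; port 4891 is its usual default.
# Standard library only; the model filename is illustrative.
import json
import urllib.request

def build_payload(prompt: str, model: str = "orca-mini-3b-gguf2-q4_0.gguf") -> dict:
    """OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }

def ask_local(prompt: str, base: str = "http://localhost:4891/v1") -> str:
    """POST one chat turn to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{base}/chat/completions",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the protocol is OpenAI-shaped, tools like LangChain can usually be pointed at the local base URL instead of the hosted API.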

[–] [email protected] 1 points 1 year ago (1 children)

How does it compare to commercially available options, namely for code generation, text summarization, and answering programming questions? I'm curious if they trained it on code.

[–] [email protected] 2 points 1 year ago (1 children)

I can't really answer that precisely, since I've only used commercial alternatives to play around with. What I can say is that the "Nous - Vicuna" model didn't feel worse than GPT 3.5 overall (and there are a dozen other models available), just a bit slower (which depends on your computer). The GPT4All team also curates their list of models, which is really comfortable considering the million new models appearing every day, and the app keeps getting new features. We also chose this system because self-hosting is safer, keeps us in control, and is free. Plus we try to only use the LLM where needed in our small project, so I'll be able to give more insight about that later, but overall it is more than usable.

[–] [email protected] 2 points 1 year ago (1 children)

Thanks for your insight. I, too, hope to come to a conclusion and share it with the community once I have one formulated. Over the next month I hope to get something working.

[–] [email protected] 1 points 1 year ago

You're very welcome!
