submitted 2 days ago by [email protected] to c/[email protected]

Instead of just generating the next response, it simulates entire conversation trees to find paths that achieve long-term goals.

How it works:

  • Generates multiple response candidates at each conversation state
  • Simulates how conversations might unfold down each branch (using the LLM to predict user responses)
  • Scores each trajectory on metrics like empathy, goal achievement, coherence
  • Uses MCTS with UCB1 to efficiently explore the most promising paths
  • Selects the response that leads to the best expected outcome
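
For illustration, here is a minimal sketch of that loop in Python. The `llm.reply`, `llm.simulate_user`, and `llm.score` helpers are hypothetical stand-ins for whatever the project actually calls, not its real API:

```python
import math
import random

class Node:
    def __init__(self, history, parent=None):
        self.history = history      # conversation so far (list of turns)
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def ucb1(node, c=1.4):
    # Standard UCB1: trade off exploiting high-scoring branches
    # against exploring rarely visited ones.
    if node.visits == 0:
        return float("inf")
    return (node.value / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def plan_response(llm, history, iterations=50, branching=3, depth=4):
    root = Node(history)
    for _ in range(iterations):
        # Selection: descend by UCB1 until we reach a leaf.
        node = root
        while node.children:
            node = max(node.children, key=ucb1)
        # Expansion: on a revisited leaf, add candidate assistant replies.
        if node.visits > 0:
            for reply in llm.reply(node.history, n=branching):
                node.children.append(Node(node.history + [reply], node))
            node = node.children[0]
        # Rollout: let the LLM play the user for a few turns, then score
        # the trajectory (empathy, goal achievement, coherence).
        rollout = list(node.history)
        for _ in range(depth):
            rollout.append(llm.simulate_user(rollout))
            rollout.append(llm.reply(rollout, n=1)[0])
        reward = llm.score(rollout)
        # Backpropagation: credit every node along the path.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Commit to the most-visited immediate reply.
    return max(root.children, key=lambda n: n.visits).history[-1]
```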

Limitations:

  • Scoring is done by the same LLM that generates responses
  • Branch pruning is naive: it is simply threshold-based rather than something smarter like progressive widening
  • Memory usage grows with tree size; there is currently no node recycling
submitted 5 days ago by [email protected] to c/[email protected]

Thoughts on this? I hate LLMs, but I think this article does a disservice by depicting the victims as perfectly normal before their mental health collapsed. They must have had some kind of preexisting problems that were exacerbated by their use of ChatGPT.

The point about these chatbots being sycophantic is extremely true, though. I am not sure whether they are deliberately designed that way because it sells more, or whether LLMs are simply too stupid to be argumentative. I have felt the effect personally when using Deepseek: I have noticed that its reasoning section will often say something like "the user is very astute," and as someone who is socially isolated and never complimented, it feels good to read that.

I guess the lesson here is to use these chatbots as tools rather than friends, because they are not capable of being the latter. I have tried a few times to have discussions about politics with Deepseek, but it is a terrible experience because of the aforementioned sycophancy; it always devolves into a yes-man.

submitted 4 days ago by [email protected] to c/[email protected]

This paper introduces DiffuCoder, a 7B-scale open-source masked diffusion large language model (dLLM) specifically designed for code generation.

The research provides insights into how dLLMs generate content, distinguishing their decoding behavior from that of autoregressive (AR) models. Unlike AR models, dLLMs can intrinsically adjust how causal their generation is, and increasing the sampling temperature diversifies not only token choices but also the order in which tokens are generated, creating a rich search space for reinforcement learning (RL).

This flexibility lets dLLMs behave less autoregressively, generating tokens in a less sequential, more human-like code-writing order.
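
As a toy illustration of that idea (not the paper's actual decoding code), one common masked-diffusion scheme commits the most confident masked position at each step, so temperature influences both which token is sampled and which position gets filled next:

```python
import torch

def decode_step(logits, mask, temperature=1.0):
    """One illustrative unmasking step.

    logits: (seq_len, vocab) model outputs for the current partial sequence
    mask:   (seq_len,) bool, True where a position is still masked
    """
    probs = torch.softmax(logits / temperature, dim=-1)
    tokens = torch.multinomial(probs, 1).squeeze(-1)       # sample per position
    conf = probs.gather(-1, tokens[:, None]).squeeze(-1)   # sampled-token prob
    conf[~mask] = -1.0                                     # ignore filled slots
    pos = int(conf.argmax())                               # commit best slot
    return pos, int(tokens[pos])
```

Higher temperature flattens `probs`, so both the sampled tokens and the winning position vary more from run to run, which is exactly the order diversity described above.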

To leverage this diversity and improve performance, the paper proposes coupled-GRPO, an RL algorithm. It uses a coupled-sampling scheme that constructs complementary mask noise during training, reducing the variance of token log-likelihood estimates while maintaining training efficiency.
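
A rough sketch of the coupled-sampling idea as I understand it from the abstract (function names are mine, not the paper's): draw one random mask and its complement, so the two forward passes together cover every token exactly once, instead of two independent noisy masks that may double-count or skip tokens:

```python
import torch

def coupled_masks(seq_len, mask_ratio=0.5, device="cpu"):
    # Rank positions by random noise; mask the first k in one view and
    # the remaining positions in the complementary view.
    order = torch.rand(seq_len, device=device).argsort()
    k = int(seq_len * mask_ratio)
    mask_a = torch.zeros(seq_len, dtype=torch.bool, device=device)
    mask_a[order[:k]] = True
    mask_b = ~mask_a    # complement: every token is masked in exactly one view
    return mask_a, mask_b
```

Because each token's log-likelihood is then estimated under exactly one of the two views, no token is left out of a given pair of passes, which is where the variance reduction comes from.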

Experimentally, coupled-GRPO significantly boosts DiffuCoder's performance on code generation benchmarks, notably improving EvalPlus scores by 4.4% with training on only 21K samples. The research also shows that coupled-GRPO trained models experience a smaller performance drop when decoding steps are halved (resulting in a 2x speedup), indicating increased parallelism and reduced reliance on AR bias during decoding.

The model is available at https://huggingface.co/apple/DiffuCoder-7B-cpGRPO

submitted 4 days ago by [email protected] to c/[email protected]

In modern LLM applications like RAG and Agents, the model is constantly fed new context. For example, in RAG, we retrieve relevant documents and stuff them into the prompt.

The issue is that this dynamically retrieved context doesn't always appear at the beginning of the input sequence. Traditional KV caching only reuses a "common prefix," so if the new information isn't at the very start, the cache hit rate plummets, and your GPU ends up recomputing the same things over and over.
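
A toy example of the prefix limitation (purely illustrative; strings stand in for token IDs): only the longest common prefix of the old and new prompts is reusable, so a retrieved document that changes position invalidates almost the whole cache:

```python
def reusable_prefix_len(cached_tokens, new_tokens):
    # Prefix caching can only reuse KV entries up to the first mismatch.
    n = 0
    for a, b in zip(cached_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n

cached = ["<sys>", "docA", "docB", "question1"]
new    = ["<sys>", "docB", "docA", "question2"]   # same docs, different order
print(reusable_prefix_len(cached, new))           # 1 -> only "<sys>" is reused
```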

CacheBlend removes this restriction: it allows pre-computed KV caches to be reused regardless of their position in the input sequence.

This makes it possible to achieve a 100% KV Cache hit rate in applications like RAG. The performance gains are significant:

  • Faster Time-To-First-Token (TTFT): Get your initial response much quicker.
  • More Throughput: Serve significantly more users with the same hardware.
  • Almost lossless Output Quality: All of this is achieved with little degradation in the model's generation quality.

CacheBlend works by intelligently handling the two main challenges of reusing non-prefix caches:

  • Positional Encoding Update: It efficiently updates positional encodings to ensure the model always knows the correct position of each token, even when we're stitching together cached and new data.
  • Selective Attention Recalculation: Instead of recomputing everything, it strategically recalculates only the minimal cross-attention needed between the new and cached chunks to keep generation quality nearly lossless.
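
To make the first point concrete, here is a conceptual sketch of how a RoPE-based model could re-position a cached chunk. Because RoPE rotations compose additively, keys cached at old positions can be moved by applying the rotation for the position delta. This is a sketch of the general idea, not LMCache's actual code:

```python
import torch

def rope_rotate(x, positions, base=10000.0):
    # Apply the standard RoPE rotation for the given positions.
    # x: (seq_len, head_dim) cached keys; positions: (seq_len,) float tensor.
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=torch.float) / half)
    angles = positions[:, None] * freqs[None, :]           # (seq_len, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

def shift_cached_keys(k_cache, old_start, new_start):
    # Rotations at angle p*f compose additively, so rotating by the
    # constant delta (new_start - old_start) moves keys already encoded
    # at the old positions to their new positions in a single pass.
    seq_len = k_cache.shape[0]
    delta = torch.full((seq_len,), float(new_start - old_start))
    return rope_rotate(k_cache, delta)
```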

An interactive CacheBlend demo is available at: https://github.com/LMCache/LMCache-Examples/tree/main/demo-rag-blending
