submitted 5 days ago* (last edited 5 days ago) by SunlessGameStudios@lemmy.world to c/gamedev@programming.dev

It's just a part of the whole Bee Movie meme genre. Please let me know if you enjoyed it. Reddit sucks, so I made my account to share here instead. I coded everything myself and made all the assets, beyond what I got from CC0 sources.

submitted 6 days ago* (last edited 6 days ago) by Quokka@quokk.au to c/gamedev@programming.dev

Not sure how far I'll go with this but I'm having a lot of fun so far. Shoutout to https://opengameart.org/ for all my temporary assets.

So far I've got dialogue via Dialogue Manager, NPCs, factions, enemies with different weapons (both projectile and hitscan), and 8-directional sprites. Not bad for two days' work.


For context, I am creating a HOI4-style strategy game set in the Cold War. The battles will likely be turn-based rather than involving maneuvering troops on the map.

A large part of the Cold War involved various proxy conflicts between the two global superpowers, the Americans and the Soviets. If I make it so that the player can only intervene in these proxy conflicts, things like the "mega-factions" you see in games like HOI4 (which are a pain to deal with) would no longer be a problem, and it would also mean a lot less work for me, since I wouldn't have to add all the different factions joining in to create a huge WWIII.

However, limiting the player to intervention in other countries would restrict the alternate-history scenarios the player can pursue, and it would also mean that many countries effectively become NPCs. I could implement a civil war mechanic, where certain focus paths lead to a civil war between two or more factions within your country, and you could seek intervention from the major powers. This would make countries in Latin America, Africa, Asia, etc. more fun to play if there were no direct war mechanic.

Finally, if there was a direct war mechanic, how should the game react to the Americans and Soviets being in direct conflict? Should the game end once a nuclear weapon has been fired, indicating that nuclear annihilation has occurred?


https://codeberg.org/ZILtoid1991/pixelperfectengine

Originally the editor was a completely separate project, but due to massive architectural changes, it got left behind. So later I decided to put it into the engine's repository. Then I had the thought:

Why shouldn't the editor be a direct component of the engine? It would even allow in-game editing of levels, and it could be individually turned off.

However, this gives me a few more dilemmas. Should I just use the engine's newly added high-resolution overlay capabilities to show the editor windows on top of everything? Should the editor be a separate window? Should both be options?


I'm not a game dev, so please forgive me if this is the wrong place for this type of question, but I'm looking for some resources to understand why games take so long to compile.

For context, I've worked with former game devs who've mentioned that builds can take anywhere from 4 to 6 hours to complete, even with a distributed architecture, depending on the hardware. That blew my mind. They said it has something to do with compiling shader permutations but didn't go into any more detail. I have only a very primitive understanding of what shaders are; I mostly work with infrastructure and optimizing build systems.

Like I said, I'm not a game dev, I'm just curious. I appreciate any insight or resources you can throw my way. Thanks!
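Since the question is about why shader permutations blow up build times: the usual culprit is combinatorics. Here's a toy sketch (my own illustration, not any engine's actual build system) of how on/off shader features multiply into variants:

```python
from itertools import product

# Each boolean feature a shader supports (skinning, fog, shadows, ...)
# doubles the number of variants the build must compile, because any
# combination of features might be requested at runtime.
features = ["skinning", "fog", "shadows", "normal_map", "instancing"]

variants = list(product([False, True], repeat=len(features)))
print(len(variants))  # 2**5 = 32 variants for just five on/off features

# With ~20 toggles (not unusual for a big engine uber-shader), the
# permutation count reaches the millions:
print(2 ** 20)  # 1048576
```

Multiply that by target graphics APIs and quality tiers, and hours-long builds stop looking surprising.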

submitted 1 week ago* (last edited 1 week ago) by Monstrosity@lemmy.today to c/gamedev@programming.dev

Great video with some tips I have not heard before concerning seemingly trivial decisions that cause serious issues as projects get larger.

The creator is an experienced programmer but seems brand new to making videos. That said, apart from an annoying number of calls to action ('leave a comment...'), it is well produced, especially for a first crack at the craft.


This is really good if you're wondering what a publishing company's contract might look like. They published it to give gamedevs an idea of what they should expect.

They're also celebrating funding 24 different games! I wonder when those will start coming out.


i spent some time this week building a small wordle clone and integrating it into my app.

the process was actually pretty fun. the main parts were building the word validation, handling the tile states (correct letter / wrong spot / not in word), and making sure the guesses update instantly for each player. the UI took a bit of tweaking too so it feels responsive and not laggy when revealing the tiles.
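for anyone curious, the tile-state part has one real gotcha: duplicate letters. here's a minimal sketch of the usual two-pass evaluation (my own illustration of the standard approach, not this app's actual code):

```python
def score_guess(guess: str, answer: str) -> list[str]:
    # pass 1: mark exact matches and count the unmatched answer letters
    result = ["absent"] * len(guess)
    remaining: dict[str, int] = {}
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = "correct"
        else:
            remaining[a] = remaining.get(a, 0) + 1
    # pass 2: mark "wrong spot" tiles, consuming the remaining counts so a
    # duplicate letter isn't highlighted more times than it actually appears
    for i, g in enumerate(guess):
        if result[i] != "correct" and remaining.get(g, 0) > 0:
            result[i] = "present"
            remaining[g] -= 1
    return result

print(score_guess("speed", "abide"))
# ['absent', 'absent', 'present', 'absent', 'present']
```

note how only one of the two e's in "speed" lights up, because "abide" contains a single e.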

the interesting part was fitting it into the rest of my app since it’s more of a social space with rooms and games (we already have mafia and spyfall). so the wordle game had to work cleanly alongside those without breaking the flow of the rooms.

it’s still pretty simple right now but it works well and people have started playing it.

if anyone is curious, you can check it out here: The Hideout

submitted 2 weeks ago* (last edited 2 weeks ago) by RougeEric@lemmy.zip to c/gamedev@programming.dev

A few years ago I decided to fix my biggest gripe with Unity's InputSystem: there is no intuitive, fuss-free way of determining which UI, character controller, popup, etc. should be receiving inputs at any given time.
Sure, the Action Maps are a great baseline for handling this, since they let you assign a set of inputs to each given system, but you still have to make sure to enable and disable them at the correct moments. This is easy in a small project, but when you have dozens of systems and UIs to contend with, it can get kind of messy.

So I started working on a system that "automatically" handles all that mess and complexity for me.
After a few years of working on it when I felt like it in my spare time, I'm officially taking InputLayers out of beta:

You can get InputLayers for free on the asset store.

What is InputLayers?

The short version is that it's a system that lets you assign input actions to layers that "stack" by priority. So when your popup comes up on screen, its layer is added to the top of the stack, and as long as no other layer takes its place, only inputs from that layer are taken into account.
There's a bit more depth to all this, with layer priorities that prevent less "important" systems from taking over higher-priority ones, but at its core it lets you set things up in a single configuration window and then never have to worry about whether your character will keep moving when your main menu is open, or whatever other similar conflict you can imagine.
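To make the stack idea concrete, here is a toy sketch in Python (my own illustration of the concept only, not the asset's actual C# API):

```python
class Layer:
    """A hypothetical input consumer (UI, character controller, popup...)."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def handle(self, action):
        self.received.append(action)


class InputLayerStack:
    """Only the topmost layer on the stack receives dispatched actions."""
    def __init__(self):
        self.stack = []

    def push(self, layer):
        self.stack.append(layer)

    def pop(self, layer):
        self.stack.remove(layer)

    def dispatch(self, action):
        # lower layers are muted while something sits above them
        if self.stack:
            self.stack[-1].handle(action)


stack = InputLayerStack()
gameplay, menu = Layer("gameplay"), Layer("main menu")
stack.push(gameplay)
stack.dispatch("move")      # character moves
stack.push(menu)            # menu opens and takes over the top of the stack
stack.dispatch("navigate")  # goes to the menu; the character stays put
stack.pop(menu)             # menu closes; gameplay is active again
stack.dispatch("jump")
print(gameplay.received)    # ['move', 'jump']
print(menu.received)        # ['navigate']
```

The "character keeps moving while the menu is open" bug simply can't happen here, because the gameplay layer never sees input while it isn't on top.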

Video overview

I go over the core idea in a little bit more detail in this video: https://youtube.com/watch?v=bXEuzpbGlCI

Sample scenes

I've included a few sample scenes that cover most of the basic use cases. Their code is a bit complex if you're unfamiliar with UI Toolkit, but I've mostly isolated the fussy stuff so you can concentrate on understanding how the actual InputLayers logic gets handled.

Documentation

I've set up the documentation on GitHub for ease of access, and so that people can easily post any issues they encounter.


Creating a good (and successful!) game is beyond challenging, especially in these trying times. What's been your experience balancing the different aspects of your own projects?

submitted 3 weeks ago* (last edited 3 weeks ago) by sbeak@sopuli.xyz to c/gamedev@programming.dev

I am creating a strategy game similar to Hearts of Iron IV, set during the Cold War. Each nation will have its own focus tree, and technologies need to be researched as well. The twist is that, rather than maneuvering troops on the ground, war will be turn-based, like Pokémon battles. My issue now is that I need to plan out what war will feel like: whether it's one large battle or many smaller battles.

Having one large battle makes things much more strategic, as you pretty much have to play the long game and think far ahead, but the main downside is that it could be long and boring, and it would stop you from doing anything else for several in-game years. One potential solution is letting you temporarily leave the battle screen to do other things, but you would become vulnerable if you don't manage your army (perhaps generals and such could be unlocked as you go along to automate some tasks? That would turn the game into one of those idle games though, and those usually aren't very fun).

Having many battles that pop up could be a good alternative, as you could then do your focuses, research, and stockpile equipment between battles. It also makes the battles less boring/tedious, as each battle could be unique in some way, with various challenges, whether that's taking a fort on a hill or crossing a fast river. There would probably be a war score meter of some kind that ticks up when you win battles and down when you lose them.

The problem with this one is that I'm not sure how to transition the player from non-battling to battling. It would probably involve events, where you could either go on the offensive (gaining a temporary attack bonus but becoming vulnerable if you don't succeed quickly), stand strong on the defensive (gaining a temporary defense bonus), or retreat (losing war score but preserving your equipment stockpiles). This might be annoying, as it would interrupt whatever the player is doing. Perhaps this could be solved with a ticking timer that begins ahead of an enemy attack (you would have to select/plan an option ahead of time, and if you're late, your nation is considered unprepared for the attack and you get negative modifiers).

I'm also unsure how involving other faction members, allies, etc. would work. Should it be similar to Pokémon Double Battles, where each nation gets to do something each turn? Should it alternate (nation 1 of the faction goes first, then nations 2, 3, and so on, looping around)? Should there be different "fronts" with different nations competing in them? And how would I deal with really large factions with 10+ members each (like NATO or the Warsaw Pact)?


Hey all,

For the last few weeks I have been working away at the Godot Object Compiler, an Unreal Header Tool-esque code generator for Godot GDExtensions.

It allows you to annotate classes and their members, and it generates the necessary bindings to register properties, functions, and signals with the Godot engine.

Internally it uses a tree-sitter parser and generates a simplified AST, so the generators can query, e.g., the field and parameter types and automatically create the correct variant type and property hints.

Here's an example of what that can look like:

#include "characters/chicken.generated.h"

GODOT_CLASS();
class Chicken : public CharacterBody3D {
	GODOT_GENERATED_BODY();

public:
	void _physics_process(double p_delta) override;

	GODOT_SIGNAL();
	void goc_goc(float p_volume);

	GODOT_FUNCTION(AnyPeer, CallRemote, Reliable);
	void jump_the_fence();

	GODOT_FUNCTION(ScriptVirtual);
	int pick_food(const Ref<Food>& food);

	GODOT_CATEGORY("Behaviour");
	GODOT_GROUP("Movement");

	GODOT_PROPERTY();
	float speed = 10.0f;

	GODOT_SUBGROUP("Jumping");

	GODOT_PROPERTY();
	float jump_height = 2.0f;

	GODOT_PROPERTY();
	Ref<Curve> jump_curve;

private:
	GODOT_PROPERTY();
	TypedArray<Food> food_in_belly;
};

GODOT_GENERATED_GLOBAL();

The available property hints, usages, variant types, base classes, etc. are also parsed from the linked godot-cpp headers, so it won't break when something changes upstream (to an extent, of course). I'm currently testing against the 4.5 branch.
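To illustrate the general shape of annotation-driven generation (a naive regex toy of my own, nowhere near the actual tree-sitter/AST implementation): scan for a marker macro, then pick up the declaration that follows it.

```python
import re

# A header snippet in the style shown above (illustrative input only).
HEADER = """
GODOT_PROPERTY();
float speed = 10.0f;

GODOT_PROPERTY();
Ref<Curve> jump_curve;
"""

# Naive scan: each GODOT_PROPERTY() macro annotates the next member,
# so capture that member's type and name.
pattern = re.compile(r"GODOT_PROPERTY\(\);\s*([\w:<>]+)\s+(\w+)")
print(pattern.findall(HEADER))
# [('float', 'speed'), ('Ref<Curve>', 'jump_curve')]
```

A real implementation needs a proper parser (templates, attributes, comments, and macros defeat regexes quickly), which is exactly why the project uses tree-sitter.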

It's still early days, but if you have some feedback, I'd love to hear it :)

submitted 3 weeks ago* (last edited 3 weeks ago) by kcweller@feddit.nl to c/gamedev@programming.dev

Hey everybody,

I'm looking for a tool stack that is FOSS as much as possible, running on Linux.

Currently:

  • 2D Art: Krita / GIMP / InkScape
  • 3D Modeling: Blender
  • Engine: Godot
  • Content creation: Kdenlive

All this works great, but I'm looking for a more general world editor.

I was watching this video from BlizzCon 2016, and their editor, WowEdit, is just the dream. No way there is something like that available right now.

What I'm mostly looking for is a tool where I can paint terrain with a pen, like they do in the video. It needs to be able to export heightmaps and splatmaps. Do any of you have a good suggestion? I've looked at TerreSculptor, but that is mostly for generating heightmaps, which is cool in its own right.

It's okay if it isn't FOSS, but being FOSS would be preferable. I try to support such projects anyway, so Free as in Libre, haha.
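For what it's worth, the heightmap export itself is the simple part: it's just a grayscale image where brightness encodes elevation. A stdlib-only sketch of my own (most tools use 16-bit PNG or RAW; plain-text PGM keeps the example dependency-free):

```python
import math

# An 8x8 heightmap of a radial hill, written as a plain-text PGM image.
SIZE = 8
rows = []
for y in range(SIZE):
    row = []
    for x in range(SIZE):
        # brightness encodes elevation: 255 at the center, 0 at the edges
        d = math.hypot(x - SIZE / 2, y - SIZE / 2)
        row.append(max(0, int(255 * (1 - d / (SIZE / 2)))))
    rows.append(row)

with open("heightmap.pgm", "w") as f:
    f.write(f"P2\n{SIZE} {SIZE}\n255\n")  # magic, dimensions, max value
    for row in rows:
        f.write(" ".join(map(str, row)) + "\n")
```

A splatmap is the same idea, with one channel per terrain texture (grass, rock, ...) storing blend weights; the hard part a good editor adds is the interactive painting on top.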

submitted 3 weeks ago by Wawe@lemmy.world to c/gamedev@programming.dev

cross-posted from: https://lemmy.world/post/43483174

At FOSDEM 2026, members of Element’s VoIP team - Robin Townsend, Timo Kandra and Valère Fédronic - presented a deep dive into the future of real time communication on Matrix.

Their talk gave an update on MatrixRTC, Matrix’s framework for bringing voice, video, and other live, interactive experiences directly into rooms. Their contributions to Matrix enable everything from large-scale calls and collaborative tools to multiplayer games, virtual worlds, and entirely new ways for people to interact in real time.

Watch the whole presentation Advancing real time communication on Matrix

We’ve been working on MatrixRTC as part of our work at Element, building the foundations for a large-scale, secure VoIP solution in Matrix. This work is done by the same team that is behind Element Call; Element Call, the VoIP part of Element, sits on top of MatrixRTC.

Matrix has traditionally focused on persistent, asynchronous messaging. Real time communication (sub-100ms), however, introduces a very different set of requirements: it demands low latency, flexible participation, and ephemerality. At the same time, it must preserve Matrix’s core principles of decentralisation, federation, and security.

Historically, Matrix clients only supported 1:1 peer-to-peer WebRTC calls, using Matrix rooms primarily for call-oriented signalling and persisting what was effectively ephemeral state into room history. As calls grew larger and use cases expanded beyond simple voice and video, it became clear that real time communication needed first-class support in the Matrix protocol itself, with the flexibility to support multiple use cases beyond 1:1 calls.

MatrixRTC is our attempt to make real time applications native to Matrix, striking a balance between decentralisation and the practical demands of scalable, low-latency, end-to-end encrypted media and data exchange. Through concrete demos and implementation details, we showed how this approach enables entirely new classes of applications, including calls, games, virtual worlds and collaborative tools.

Introducing slots for interactive rooms

MatrixRTC introduces the concept of slots, which allow room administrators to add real time communication features to their rooms. These slots can be anything from voice or video calls to 3D virtual worlds or multiplayer games. Each slot combines an application - which specifies the type of data participants exchange - and an identifier, allowing multiple parallel sessions in the same room.

The first application we’re adding to the specification is m.call, which covers basic voice and video calls. But third-party apps are fully supported, enabling developers to create custom experiences like virtual simulations or collaborative games.

Slots are managed via state events, ensuring that they are persistent, authorised, and moderatable by room admins. When participants want to join a slot, they connect by sending membership events and publish their media over a chosen transport. A transport in this context is a WebRTC SFU (Selective Forwarding Unit); currently the LiveKit SFU is supported. Full-mesh WebRTC and WebSocket-based solutions are also being considered in the design of MatrixRTC.
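As a rough illustration of how a slot and a membership relate (the event types and field names below are my own assumptions for exposition, not the actual MSC wire format):

```python
# Hypothetical shapes only: "m.rtc.slot" / "m.rtc.member" and all field
# names here are illustrative assumptions, not the real specification.
slot = {
    "type": "m.rtc.slot",        # persistent state event, admin-controlled
    "state_key": "lobby-call",   # identifier: allows parallel sessions per room
    "content": {"application": "m.call"},  # what data participants exchange
}

membership = {
    "type": "m.rtc.member",      # sent by a participant joining the slot
    "state_key": "@alice:example.org",
    "content": {
        "slot_id": "lobby-call",
        "transport": {"type": "livekit", "url": "https://sfu.example.org"},
    },
}

# A client deciding whether a session is live in this room matches
# memberships against the slot identifier:
active = [m for m in [membership] if m["content"]["slot_id"] == slot["state_key"]]
print(len(active))  # 1
```

The split matters: the slot (what is allowed to happen here) is long-lived and moderated state, while memberships (who is participating right now) come and go with the session.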

Other features, like delayed events, notifications, and encryption, are also part of the specification. For deeper technical details, past Matrix Conference talks and the Matrix Spec Proposals repository are excellent resources.

Sticky events: reliable, ephemeral data delivery

It is essential to provide a good experience before joining a session. A client should be informed immediately via sync about any ongoing MatrixRTC session. To avoid polluting room state with this information, sticky events are introduced. These events are ephemeral, low-privilege, and can be encrypted. During a limited lifetime (for example, around an hour), their delivery over sync is guaranteed. They are stored in the room timeline rather than in the authoritative room state that is subject to state resolution. Sticky events are ideal for real time session participation, letting clients join ongoing calls or sessions and immediately receive the necessary data, even if they missed some events earlier due to gappy syncs (more on sticky events here).

Transports: keeping real time communication decentralised

Each participant chooses their transport, which can be a home-server resource or a peer-to-peer connection. For Element Call, we currently use LiveKit media servers, which handle the heavy lifting of fan-out for media streams. This approach allows large calls to scale gracefully while keeping the system federated, decentralised and efficient. For example, in a typical office environment, many users converge on the same homeserver, minimising the connections needed to participate in a large call.

A MatrixRTC SDK for developers

With these protocol improvements, we also refactored our codebase, creating a MatrixRTC SDK that simplifies building real time applications. The SDK handles the complexities of connecting to multiple SFUs, authenticating with Matrix, managing sticky and delayed events, and exchanging media. Developers can now use this SDK to build applications, such as games or collaborative tools, without having to handle the underlying real time infrastructure directly.

For instance, we demonstrated a simple HTML template application using the Godot game engine, leveraging the MatrixRTC SDK. Through this setup, developers can access observable real time data to integrate into games. The MatrixRTC SDK is used as an abstraction for core capabilities such as user identity and account setup, device verification, encryption (not shown in the demo), media connectivity via an SFU, and the existing Matrix backend infrastructure.

Building on MatrixRTC: live demos

To showcase what’s possible, we built two multiplayer games using MatrixRTC. Players communicate over federated servers, exchange real time events, and interact seamlessly despite network variability. Although live demos sometimes face latency challenges, the system handles rollbacks and syncing to ensure a smooth experience.

We showed Godot-MatrixRTC-FlappyRoyal, a game similar to FlappyBird, and Godot-MatrixRTC-Keyboard-Kart, a racer-like multiplayer game.

Games and other applications can run as widgets, providing an added layer of security. The trusted client handles encryption and key management, so users never expose their full Matrix account or keys to external real time apps running inside the widget.

Currently, the MatrixRTC SDK is available in JavaScript, but the widget architecture allows the low-level Matrix responsibilities to be handled by other SDKs. The Matrix Rust SDK, for instance, supports the widget postMessage API. Widgets are based on iframes, and with Wasm becoming more mainstream, this opens the door for real time applications beyond the web stack, from Godot-based games to custom simulations.

MatrixRTC represents a significant step forward for Matrix, enabling decentralised, real time, and interactive experiences in rooms while maintaining the federated, secure principles of the ecosystem.


The main reason behind this rise in latency is that systems have become more and more complex, and developers often don't know or don't understand each part that can impact latency.

This website has been made to help developers and consumers better understand the latency issues and how to tackle them.

submitted 4 weeks ago by cm0002@lemy.lol to c/gamedev@programming.dev

I have been building a small browser-based multiplayer project that was text only: mostly social and party-type games.

Recently I added room wide voice chat using WebRTC. Everyone in the room can join the call if they want, or just stay in text.

What surprised me was how different everything felt.

Before voice:

  • Small rooms with 3 or 4 people felt kind of dead
  • Conversations were slower
  • Social deduction games felt less intense

After adding voice:

  • Even 3 people feels active
  • Accusations hit way harder when you can hear hesitation
  • People stay longer once they join voice
  • Some users join just to listen

Nobody is forced to join voice, but once one or two people join, others usually follow.

Has anyone else added voice to a text based project? Did it change engagement for you?

link


After working on my weird shooter game for 5 years, I realized I'm never going to finish this project. In this video I explain why I've decided to quit the game and what's next.


The link is for Sonniss' collection of royalty-free music and sound effects that gets released with each GDC.

Here's the most important part of the license; tl;dr: you can use it on any number of commercial projects:

RIGHTS GRANTED

a) Licensee may use the licensed sound effects on an unlimited number of projects for the entirety of their life time.

b) Licensee may use and modify the licensed sound effects for personal and commercial projects without attribution to the original creator.

c) Licensee may publicly perform a reproduction of the sound effects over any form of medium.

d) Licensee may use the licensed sound effects for the purposes of synchronization with audio and visual projects the Licensee is involved with, which includes but is not limited to: games, films, television & interactive projects.

NO AI TRAINING OR USAGE

For clarity and avoidance of doubt, the Licensee is expressly prohibited from using any sound effects licensed under this Agreement for the purpose of training artificial intelligence technologies. This includes, but is not limited to, technologies capable of generating sound effects or works in a similar style or genre as the licensed sound effects. The Licensee shall not use, reproduce, or otherwise leverage the licensed sound effects in any manner for purposes of developing, training, or enhancing artificial intelligence technologies, nor sublicense these rights to any third party, without the Licensor’s specific and express written permission.


Any good strategies for solo indie marketing? Building a game is one thing, but getting eyeballs on it without a publisher or marketing budget feels like shouting into the void. What's actually worked for you?

submitted 1 month ago* (last edited 1 month ago) by nick_ocb@lemmy.world to c/gamedev@programming.dev

I've been working on Educational Family Games, a 4-player local co-op for families. The 'quick games' mode has 80 mini-games, and honestly? They took two years from first prototype to final polish.

Not because any individual game is complex, but because:

  • They need to work for kids (5+) AND adults
  • No elimination mechanics (everyone plays every round)
  • Has to hold up to 100+ plays without getting stale
  • Controller-handling edge cases you wouldn't believe

Full list with descriptions: https://www.crazysoft.gr/all/educational_family_games_quickgames.php

Curious—how long do your 'small' features actually take to get right?


Game Development


Welcome to the game development community! This is a place to talk about and post anything related to the field of game development.

Community Wiki

founded 2 years ago