this post was submitted on 03 Nov 2025
107 points (96.5% liked)


As a Java engineer who has worked in web development for several years, I've heard many times that X is good because of SOLID principles or that Y is bad because it breaks them, and I've had to memorize the "good" way to do everything before interviews. The more I dig into the real reasons I'm doing something a particular way, the harder I find it to keep taking these claims at face value.

One example is creating an interface for every goddamn class I make because of "loose coupling" when in reality none of these classes are ever going to have an alternative implementation.
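
For concreteness, the pattern I mean looks roughly like this (a sketch with made-up names, written in C++ terms here, but the Java version is the same idea with interface/implements):

// The "loose coupling" ceremony: an interface that will only ever have one
// implementation. UserService / DefaultUserService are made-up names.
#include <string>

class UserService {
public:
    virtual ~UserService() = default;
    virtual std::string findName(int id) = 0;
};

class DefaultUserService : public UserService {
public:
    std::string findName(int id) override { return "user-" + std::to_string(id); }
};

// ...versus just writing the one class that is actually needed:
class UserLookup {
public:
    std::string findName(int id) { return "user-" + std::to_string(id); }
};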

Also, the more I get into languages like Rust, the more these doubts grow, leading me to believe that most of it is dogma that has drifted far beyond its initial motivations and goals and is now just a mindless OOP circlejerk.

There are definitely occasions when these principles do make sense, especially in an OOP environment, and they can also make some design patterns really satisfying and easy.

What are your opinions on this?

[–] Feyd@programming.dev 55 points 1 day ago* (last edited 1 day ago) (7 children)

If it makes the code easier to maintain it's good. If it doesn't make the code easier to maintain it is bad.

Making interfaces for everything, or getters and setters for everything, just in case you change something in the future, makes the code harder to maintain.

This might make sense for a library, but it doesn't make sense for application code that you can refactor at will. Even if you do have to change something and it means a refactor that touches a lot, it'll still be a lot less work than bloating the entire codebase with needless indirections every day.

[–] NigelFrobisher@aussie.zone 1 points 34 minutes ago

True. The open-closed principle is particularly applicable to library code, but it's a waste much of the time in a consuming application, where you will be modifying the code far more often.

[–] termaxima@slrpnk.net 1 points 11 hours ago

Getters and setters are superfluous in most cases, because you do not actually want to hide complexity from your users.

To use the usual trivial example: if you change your circle's circumference from a property to a function, I need to know! You just replaced a memory access with some arithmetic; depending on my behaviour as a user, this could be either great or really bad for my performance.
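
Roughly what I mean, as a sketch (hypothetical Circle types, C++ for concreteness):

constexpr double kPi = 3.14159265358979323846;

// Version 1: circumference is a stored property; reading it is a memory access.
struct CircleV1 {
    double radius;
    double circumference;   // kept in sync by whoever constructs/mutates the circle
};

// Version 2: circumference is a function; reading it does arithmetic every call.
struct CircleV2 {
    double radius;
    double circumference() const { return 2.0 * kPi * radius; }
};

// A caller reading circumference in a hot loop cares which of these it gets;
// that is exactly the information a blanket getter policy hides.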

[–] ExLisper@lemmy.curiana.net 1 points 15 hours ago

Exactly this. And to know what code is easy to maintain, you have to see how a couple of projects evolve over time. Your perspective on this changes as you gain experience.

[–] ugo@feddit.it 17 points 1 day ago* (last edited 1 day ago)

I call it Mario-driven development, because oh no! The princess is in a different castle.

You end up with seemingly no code doing any actual work.

You think you found the function that does the thing you want to debug? Nope, it defers to a different function, which calls a method of an injected interface, which creates a different process calling into a virtual function, which loads a DLL whose code lives in a different repo, which runs an async operation deferring the result to some unspecified later point.

And some of these layers silently catch exceptions eating the useful errors and replacing them with vague and useless ones.

[–] mr_satan@lemmy.zip 12 points 1 day ago

Yeah, this. Code for the problem you're solving now; think about the problems of the future.

Knowing OOP principles and patterns is just a tool. If you're driving nails, a hammer is fine; if you're cooking an egg, I doubt a hammer is necessary.

[–] Valmond@lemmy.world 5 points 1 day ago (2 children)

I remember the recommendation to use a typedef (or #define 😱) for integers, like INT32.

In case you, like, recompile it on a weird CPU or something, I guess. What a stupid idea. At least where I worked it was dumb; if someone knows of any benefits, I'd gladly hear them!

[–] HetareKing@piefed.social 4 points 1 day ago (1 children)

If you're directly interacting with any sort of binary protocol, e.g. file formats, network protocols, etc., you definitely want your variable types to be unambiguous. For future-proofing, yes, but also because I don't want to have to go confirm whether I remember correctly that long is the same size as int.

There's also clarity of meaning; unsigned long long is a noisy monstrosity, uint64_t conveys what it is much more cleanly. char is great if it's representing text characters, but if you have a byte array of binary data, using a type alias helps convey that.

And then there are type aliases, like size_t, that are useful precisely because they have different sizes on different platforms.

I'd say that generally speaking, if it's not an int or a char, that probably means the exact size of the type is important, in which case it makes sense to convey that using a type alias. It conveys your intentions more clearly and tersely (in a good way), it makes your code more robust when compiled for different platforms, and it's not actually more work; that extra #include <cstdint> you may need to add pays for itself pretty quickly.
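
To make that concrete, here's the kind of thing I mean (a hypothetical wire format, not any real protocol):

#include <cstdint>

// Every field has an exact, platform-independent width; with plain
// int/long/short the layout would quietly depend on the target.
struct PacketHeader {
    uint16_t version;      // exactly 2 bytes
    uint16_t flags;        // exactly 2 bytes
    uint32_t payload_len;  // exactly 4 bytes
    uint64_t timestamp;    // exactly 8 bytes
};

static_assert(sizeof(PacketHeader) == 16, "unexpected padding in wire struct");

// And an alias that says "raw bytes", as opposed to char which suggests text:
using byte = uint8_t;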

[–] Valmond@lemmy.world 0 points 1 day ago (4 children)

So we should not have #defines in the way, right?

Like INT32 instead of "int". I mean, if you don't know the size, you probably won't be doing network protocols or reading binary stuff anyway.

uint64_t is good IMO, a bit long (why the _t?) maybe, but it's not one of the atrocities I'm talking about where every project had its own defines.

[–] Feyd@programming.dev 3 points 1 day ago (1 children)

"int" can be different widths on different platforms. If all the compilers you must compile with have standard definitions for specific widths then great use em. That hasn't always been the case, in which case you must roll your own. I'm sure some projects did it where it was unneeded, but when you have to do it you have to do it

[–] Valmond@lemmy.world 0 points 16 hours ago (1 children)

So show me two compatible systems where the int has different sizes.

This is folklore IMO, or incompatible anyways.

[–] Feyd@programming.dev 2 points 13 hours ago (1 children)

Incompatible? It is for cross-platform code. Wtf are you even talking about?

[–] Valmond@lemmy.world 1 points 12 hours ago (1 children)

Okay, then give me an example where this matters. If an int doesn't have the same size, like on a Nintendo DS and Windows (wildly incompatible), I struggle to find a use case where it would help you out.

[–] Feyd@programming.dev 2 points 12 hours ago (1 children)

You can write code that is dependent on using a specific width of data type. You can compile code for different platforms. I have no idea what you're thinking when you say "wildly incompatible", but I guarantee you there is code that runs on both Nintendo DS and Windows.

[–] Valmond@lemmy.world 1 points 11 hours ago (2 children)

Well cite me one then. I mean, there is super niche stuff that could theoretically need that, but 99.99% of software didn't, and even less does now. IMO.

[–] entwine@programming.dev 3 points 8 hours ago (1 children)

Have you never heard of the concept of serialization? It's weird for you to bring up the Nintendo DS and not be familiar with that, as it's a very important topic in game development. Outside of game development, it's used a lot in network code. Even javascript has ArrayBuffer.

Well cite me one then

I've personally built small homebrew projects that run on both Nintendo DS and Windows/Linux. Is that really so hard to imagine? As long as you design proper abstractions, it's pretty straightforward.

Generally speaking, the best way to write optimal code is to understand your data first. You can't do that if you don't even know what format your data is in!

[–] Valmond@lemmy.world 0 points 8 hours ago (1 children)

What on earth did you run on a DS and Windows? I'm curious! BTW we used hard-coded in-memory structures, not serialising stuff; you'd have a hard time doing that perfectly well on the DS IMO.

Still, it's only a small homebrew project, so IMO my point still stands.

As for understanding your data, you need to know the size of the int on your system to set up the infamous INT32 to begin with!

[–] entwine@programming.dev 3 points 8 hours ago (1 children)

What on earth did you run on a DS and Windows? I'm curious!

A homebrew game, of course! Well, more like a game engine demo. Making game engines is more fun than making games.

I'm not sure why you find it so hard to believe, as it's pretty straightforward to build a game on top of APIs like

void DrawRectangle(...);
void DrawSprite(...);

Then implement them differently on each target platform.

BTW we used hard-coded in-memory structures, not serialising stuff; you'd have a hard time doing that perfectly well on the DS IMO.

You mean embedded binary data? That's still serialization, except you're using the compiler as your serializer. Modern serialization frameworks usually have a DSL that mimics C struct declarations, and that's not a coincidence. Look up any zero-copy serialization tool and you'll find that they're all basically trying to accomplish the same thing: load a binary blob directly into a native C struct, but do it portably (which embedded binary data is not).
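
A rough sketch of what "compiler as serializer" means in practice (hypothetical record layout; real zero-copy tools additionally pin down endianness and alignment, which this does not):

#include <cstdint>
#include <cstring>
#include <cstddef>
#include <vector>

// Hypothetical fixed-layout record, exactly as it sits in the binary blob.
#pragma pack(push, 1)
struct SavedEntity {
    uint16_t id;
    int16_t  x;
    int16_t  y;
    uint8_t  health;
    uint8_t  flags;
};
#pragma pack(pop)

static_assert(sizeof(SavedEntity) == 8, "struct layout must match the blob");

// Copy the bytes straight into the native struct; there is no parsing step.
// Only portable as long as both sides agree on field widths and endianness.
SavedEntity load_entity(const std::vector<uint8_t>& blob, std::size_t offset) {
    SavedEntity e{};
    std::memcpy(&e, blob.data() + offset, sizeof(e));
    return e;
}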

As for understanding your data, you need to know the size of the int on your system to set up the infamous INT32 to begin with!

Nah, that's what int32_t is for. The people who built the toolchain did that for me.

[–] Valmond@lemmy.world 1 points 7 hours ago (1 children)

Yeah that's how we did it, loading a "blob" into packed structs :-)

I'm with you with the int32_t, that's totally the way to go IMO, I guess my rant about #define INT32 got lost somewhere :-)

Actually got myself a job coding for DS & Wii back in the day with my DS streaming tile engine (it is more fun to make engines): "use 64k tiles with the native 256-tile engine". I had a little demo where you wandered around and slew skeletons, Diablo 2 style, backpack and items included. Built with the unofficial reverse-engineered dev kit. Got my hands on the official docs after that!

Fun times.

[–] entwine@programming.dev 1 points 7 hours ago (1 children)

Actually got myself a job coding for DS & Wii back in the day with my DS streaming tile engine

Damn, that's sick. Landing a real job from homebrew work is the coolest backstory for a game developer. I've got a couple of homebrew projects I'm proud of, but in the world of Unity and Unreal I don't see it as a particularly in-demand skill set.

...not that I'd want to work for a game dev company in 2025 lol

[–] Valmond@lemmy.world 1 points 6 hours ago

I did have a couple of years of gamedev under my belt, but only J2ME Java mobile games, so laying my hands on a Nintendo dev kit was one of those once-in-a-lifetime highs for me. Still get a tingle when I think about it ☺️.

You're right about today's landscape though 😑, between abusive A-to-AAA companies, dark patterns and microtransactions 🤢. Such a shame. I should get into indie games more, but they all feel like they were made for Unity/UE, so they all feel a bit the same (where are the strategy games, the spinoffs of Worms, Lemmings, ...). But maybe I'm missing out; there is so much rubbish to sift through.

Cheers!

[–] Feyd@programming.dev 2 points 11 hours ago (1 children)

I'm done spending time on this. If you are so insistent on being confidently incorrect then have at it.

[–] HetareKing@piefed.social 2 points 1 day ago

The standard type aliases like uint64_t weren't in the C standard library until C99 and in C++ until C++11, so there are plenty of older code bases that would have had to define their own.

The use of #define to make type aliases never made sense to me. The earliest versions of C didn't have typedef, I guess, but that's like, the 1970s. Anyway, you wouldn't do it that way in modern C/C++.
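
For anyone curious, the practical difference (a minimal sketch):

#include <cstdint>

// Macro "alias": pure text substitution, unscoped, and the width is only
// whatever int happens to be on the current target.
#define INT32 int

// Real aliases the compiler understands, with the width guaranteed by <cstdint>.
typedef int32_t Int32;      // C-compatible spelling
using UInt64 = uint64_t;    // modern C++ spelling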

[–] xthexder@l.sw0.com 2 points 1 day ago* (last edited 1 day ago)

I've seen several codebases that have a typedef or using keyword to map uint64_t to uint64 along with the others, but _t seems to be the convention for built-in std type names.

[–] piccolo@sh.itjust.works 2 points 1 day ago

IIRC, _t denotes reserved standard type names.

[–] SilverShark@programming.dev 8 points 1 day ago (1 children)

We had it because we needed to compile for Windows and Linux on both 32- and 64-bit processors, so we defined all our Int32, Int64, uint32, uint64 and so on. There were a bunch of these definitions in the core header file, with #ifndef guards and such.

[–] Valmond@lemmy.world 4 points 1 day ago (2 children)

But you can use a 64-bit int on 32-bit Linux, and vice versa. I never understood the benefit of tagging the stuff. You have to go pretty far back in time to find an int that isn't compiled to a 32-bit signed int. There were also already long long and size_t... why make new ones?

Readability maybe?

[–] p_consti@lemmy.world 3 points 1 day ago (1 children)

Very often you need to choose a type based on the data it needs to hold. If you know you'll need to store numbers of a certain size, use an integer type that can actually hold them; don't make it dependent on a platform definition. Always using int can lead to really insidious bugs where a function works on one platform and not on another due to overflow.
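
A hedged example of the kind of bug I mean (made-up code; assumes a target where int is 16 bits, which still exists on small embedded parts):

#include <cstdint>

// Average of two sensor readings, each up to 30000.
int average_bad(int a, int b) {
    return (a + b) / 2;    // a + b can reach 60000: fine where int is 32 bits,
                           // signed overflow (undefined behaviour) where int is 16 bits
}

int32_t average_ok(int32_t a, int32_t b) {
    return (a + b) / 2;    // 60000 always fits in int32_t, on every platform
}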

[–] Valmond@lemmy.world 1 points 1 day ago (1 children)

Show me one.

I mean, I have worked on 16-bit platforms, but nobody would use that code straight out of the box on some other incompatible platform; it doesn't even make sense.

[–] p_consti@lemmy.world 3 points 1 day ago (1 children)

Basically anything low-level. When you need a byte, you also don't use an int, you use a uint8_t (reminder that char is not defined to be signed or unsigned: "Plain char may be signed or unsigned; this depends on the compiler, the machine in use, and its operating system"). Any time you need to interact with another system, like hardware or networking, it is incredibly important to know how many bits the other side uses, to avoid mismatches.

As for purely the size of an int, the most famous example is the Ariane 5 launch, where an integer overflow crashed the rocket. OWASP (the Open Worldwide Application Security Project) lists integer overflows as a security concern, though not ranked very highly, since they only cause problems when combined with buffer accesses (using user input in some arithmetic operation that may overflow into unexpected ranges).

[–] Valmond@lemmy.world 0 points 16 hours ago

And the byte wasn't obliged to have 8 bits.

Nice example, but I'd say it's kind of niche 😁 It reminds me of the underflow in a video game that turned the most peaceful NPC into a warmongering lunatic. But that wouldn't have been helped by defines.

[–] SilverShark@programming.dev 1 points 1 day ago

It was a while ago indeed, and readability does play a big role. Also, it becomes easier to just type it out. Of course auto complete helps, but it's just easier.

[–] SandmanXC@lemmy.world 1 points 1 day ago

Really well said!