Kissaki

[–] Kissaki@programming.dev 2 points 2 months ago* (last edited 2 months ago)

Notice: sr.ht is currently in alpha, and the quality of the service may reflect that.

Are these all different services? Seems like quite a hassle. Like a split of project resources.

An alpha classification doesn't inspire confidence for using it productively and for significant projects.

[–] Kissaki@programming.dev 2 points 2 months ago (1 children)

Unfortunately, I find the need to have an account in order to contribute to projects a deal breaker. It causes too much friction for no real gain. Email based workflows will always reign supreme. It’s the OG of code contributions.

After opening with a call to be open-minded, this seems quite close-minded. Sure, it's their article. Still, I was hoping for more neutral and substantiated advocacy and description.

I certainly didn't feel like it answered [all] my questions and concerns in multiple sections.

[–] Kissaki@programming.dev 2 points 2 months ago

I somewhat like the idea of being able to submit issues via email directly. It comes at a cost in spam classification and prevention, though. An account is easily usable as an additional confidence metric. Email, not so much, or only with significantly more complexity in correlating data and ensuring continuity of source.

An account is a very obvious way to build a reputation. If you see a new GitHub account submitting a PR vs someone who has contributed for a long time to significant projects in the same technology, you may approach the reviews quite differently. It is, at least, a very useful and simple way to classify authors and patch submitters.

What does SourceHut provide in this respect? To what degree does it verify incoming emails' authenticity, sender source, and continuity of the source host? To what degree does it correlate information by email address? I assume it does not.

[–] Kissaki@programming.dev 1 points 2 months ago* (last edited 2 months ago)

Additionally, the total size of "non-promoted" content, that is repositories that are for personal use (e.g. "my website", "my dotfiles") as well as private repositories, should not exceed 100 MiB

🤔 That made me explore; there are no paid tiers, and the FAQ explains the intentions:

In many cases, yes, but please read on. Our goal is to support Free Content, and we do not act as a private hosting for everyone! However, if we see that you contribute to Free Software / Content and the ecosystem, we allow up to 100 MB of private content for your convenience. Further exceptions are spelled out in our Terms of Service:


I've always seen Codeberg as a hosting platform much like GitHub and GitLab. But I see now it's a much more deliberate and specific effort and platform. And "personal use" [only] is not part of that.

[–] Kissaki@programming.dev 5 points 3 months ago* (last edited 3 months ago)

Supporting soft subs is a complex topic, though: three formats, font embedding, positioning, and animations. It's a ton of effort, and anything less than "full feature-set support" will mean subs don't render the way you designed them in your full-featured editor and local media player. And there will be differences and bugs, at least for a while. I suspect font rendering with various fonts in a media-render context will have its own set of issues.

I also think it'd be nice, but I can totally see how it may not make sense technically (complexity with its burdens vs need) or economically.

Browsers are already absurdly complex though so… maybe? :P

[–] Kissaki@programming.dev 0 points 3 months ago* (last edited 3 months ago)

RE: phabricator…I don’t know what that service is or is for, so I can’t comment if there’s any proof therein.

The "how to submit a patch" section documents that this is where they accept patches. And they do their reviews and change iterations there. By necessity, that also means hosting/having the repos.


That's confusing to me.

They only accept patches on Phabricator and host the sources there, yet they suggest using GitHub, and then Phabricator afterwards to submit the changes?

I can only imagine it's to lower the barrier to entry because GitHub is more well known. But this just seems like a confusing mess to me, without clear wording of intentions and separation of concerns [in their docs, not your post or comment here].

[–] Kissaki@programming.dev 8 points 3 months ago

These changes will apply to operations like cloning repositories over HTTPS, anonymously interacting with our REST APIs, and downloading files from raw.githubusercontent.com.
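For reference, a minimal sketch of checking where you stand against those limits via the REST API's GET /rate_limit endpoint; reading the token from a GITHUB_TOKEN environment variable is an assumption here, and sending one moves you from the anonymous to the authenticated quota:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        using var http = new HttpClient();
        // GitHub's API rejects requests without a User-Agent header.
        http.DefaultRequestHeaders.UserAgent.ParseAdd("rate-limit-check");

        // Optional: authenticate to get the higher, per-user quota.
        var token = Environment.GetEnvironmentVariable("GITHUB_TOKEN");
        if (!string.IsNullOrEmpty(token))
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", token);

        // GET /rate_limit reports the remaining requests and does not
        // itself count against the core quota.
        var json = await http.GetStringAsync("https://api.github.com/rate_limit");
        Console.WriteLine(json);
    }
}
```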

[–] Kissaki@programming.dev 73 points 3 months ago* (last edited 3 months ago) (3 children)

That's a read-only mirror, not a "move onto GitHub".

PRs get automatically closed, referring to the contrib docs.

[–] Kissaki@programming.dev 22 points 3 months ago

Lenard Flören, a Germany-based art director at an advertising agency, said he quickly realized that trying to create his dream fitness app with one lengthy prompt would lead to a plethora of bugs that “neither ChatGPT nor my clueless self had any chance of solving.”

If everyone can create programs, and everyone fails, maybe it'll bring increased appreciation for development, and for good development and products? One could hope. I guess the worst offenders won't even try themselves either way. The services are not that accessible.

[–] Kissaki@programming.dev 1 points 3 months ago* (last edited 3 months ago) (1 children)

, but it works reliably well. It takes a second or two to be redirected to the site you’re visiting.

Do you mean it works reliably well in letting users through, or in blocking AI?

Do you have sources or more information about the effectiveness of it in blocking AI? What else it blocks as collateral damage would also be interesting.

/edit: Clicking through some links (specifically canine.tools), I have to say: it may also be effective in annoying me personally, to the point of leaving those websites. Similar to consent dialogs you could go into the settings for and save with opt-outs. But it's a barrier and user-opposing functionality.

I certainly don't see it as simply or only a good and effective thing.

[–] Kissaki@programming.dev 2 points 3 months ago* (last edited 3 months ago)

It doesn't open with a summary or overview but dives right into exploration; I think the point comes across, though:

The copy and paste key codes, which have no physical keys anymore, are - to a degree - supported in software. Their claim is that those key codes are the tool for universal copy and paste, and that it's then the input interpretation's job (key and combination mapping) to offer bindings to those key codes.

GTK added support for the copy and paste key codes in January 2025. Qt also added support for them the same month. I'm not sure which released version of the GTK toolkit will first contain the fix. For Qt, it will be Qt 6.10, scheduled for release in September 2025. Together, this will cover many apps built for GNOME and KDE, as well as others that use the same toolkits.

… followed by some more "current state of support for those key codes".

 

Mapping C# array types to PostgreSQL array columns or other DBMS/DB JSON columns.
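For the PostgreSQL case, a minimal sketch with Npgsql, which maps .NET array types to PostgreSQL array columns directly; the connection string and table are assumptions for illustration:

```csharp
using Npgsql;

// Assumes a table like: CREATE TABLE posts (id serial PRIMARY KEY, tags text[]);
await using var conn = new NpgsqlConnection("Host=localhost;Database=demo");
await conn.OpenAsync();

// A C# string[] parameter is sent as a PostgreSQL text[] value.
await using var cmd = new NpgsqlCommand(
    "INSERT INTO posts (tags) VALUES (@tags)", conn);
cmd.Parameters.AddWithValue("tags", new[] { "csharp", "postgres" });
await cmd.ExecuteNonQueryAsync();
```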

 

Available and enabled by default from version 17.11 Preview 2 onwards.

The new resource explorer additionally supports search, a single view across the solution, editing multiple files and locales at once, dark mode, string.Format pattern validation, validation and warnings, a combined string and media view, and grid zoomability.

 

cross-posted from: https://programming.dev/post/11720354

UI Components: Smart Paste, Smart TextArea, Smart ComboBox

Dependency: Azure Cloud

They show an interesting new kind of interactivity. (Not that I, personally, would ever use Azure Cloud for that though.)

 

Backwards compatibility is a key principle in .NET, and this means that packages targeting previous .NET versions, like ‘net6.0’ or ‘net7.0’, are also compatible with ‘net8.0’. […]

The new “Include compatible frameworks” option we added allows you to flip between filtering by explicit asset frameworks and the larger set of ‘compatible’ frameworks. Filtering by packages’ compatible frameworks now reveals a much larger set of packages for you to choose from.

 

Truly astonishing how much generalized modding seems to be possible through general DirectX (8/9) interfaces and official Nvidia-provided tooling.

As an AMD graphics card user, it's very unfortunate that RTX/this functionality is proprietary/exclusive to Nvidia - the tooling, at least. The produced results supposedly should work on other graphics cards too (I didn't find official/upstream docs about it).

For more technical details of how it works, see the GameWorks wiki:

 

cross-posted from: https://programming.dev/post/11034601

There's a lot, and specifically a lot of machine learning talk and features in the 1.5 release of Opus - the free and open audio codec.

Audible and continuous (albeit jittery) talk on 90% packet loss is crazy.

The "WebRTC Integration" samples section has an example where you can test out the 90% packet loss audio.

 

Describes the convenience and security considerations of auto-confirmation while entering a numeric PIN - which leads to information disclosure concerns.

An attacker can use this behavior to discover the length of the PIN: Try to sign in once with some initial guess like “all ones” and see how many ones can be entered before the system starts validating the PIN.

Is this a problem?
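A minimal sketch (hypothetical names) of why auto-confirmation discloses the length: validation fires as soon as the digit count matches the stored PIN, so an attacker only has to count how many ones they can enter before the system reacts:

```csharp
using System;

// Hypothetical model of the auto-confirming PIN pad described above:
// it validates as soon as the entered digit count equals the stored
// PIN's length, with no explicit confirm step.
class AutoConfirmPinPad
{
    private readonly string _pin;
    private string _entered = "";

    public AutoConfirmPinPad(string pin) => _pin = pin;

    // Returns true when validation fired (enough digits were entered).
    public bool EnterDigit(char digit)
    {
        _entered += digit;
        if (_entered.Length < _pin.Length)
            return false; // silently waits for more digits

        Console.WriteLine(_entered == _pin ? "Unlocked" : "Wrong PIN");
        _entered = "";
        return true;
    }
}

class Program
{
    static void Main()
    {
        var pad = new AutoConfirmPinPad("483921"); // secret six-digit PIN

        // Attacker: enter ones until the pad reacts; the count is the PIN length.
        int typed = 0;
        while (!pad.EnterDigit('1'))
            typed++;
        Console.WriteLine($"PIN length disclosed: {typed + 1} digits");
    }
}
```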
