thingsiplay

[–] thingsiplay@beehaw.org 4 points 5 days ago (2 children)

Since it specifies a version number, I assume it was maybe not compatible with version 2.9 of Winamp skins? I mean, there was the classic Winamp skin system and the modern one later. I don't use Audacious or keep track of its changelog, so this is me more asking and shooting blind.

[–] thingsiplay@beehaw.org 0 points 5 days ago

What is the longest Bcachefs users can keep using it? I mean, which Linux LTS version with Bcachefs included has the longest support? Or are there distributions known to ship a patched Linux kernel with Bcachefs support?

[–] thingsiplay@beehaw.org 1 points 5 days ago (2 children)

Yes, they are reverting it. Fedora users always live on the edge. They are basically (well, not quite) "always" the first to adopt a new technology. Not even Arch Linux does that. Arch users obviously live on the edge too, but for other reasons. :D

But wasn't Fedora going to discontinue X11 support only for the GNOME version? I thought the other spins were still allowed to support it, but it doesn't matter anymore, because they are reverting this idea. I think. But why didn't you switch to another distribution instead of buying new hardware, if that was the only problem?

[–] thingsiplay@beehaw.org 4 points 5 days ago

A dash is a bit problematic from a practical point of view. For example, I allow single numbers without a colon, like just 6, which would be interpreted as 6:6. And each element is optional as well, so would -6 be a negative number, a commandline option, or a range? Some languages also use dots .. instead. If I ever want to support negative numbers, then the hyphen, dash, or minus character would be in the way.

I mean, I could do some duck-typing-like stuff, where I accept "any" non-digit character (maybe except the minus and plus characters) with a regex. Hell, even a space could be used... But I think in general a standardized character is the better option for something like this, because from a practical point of view there is no real benefit for the end user in using a different character, in my opinion. I thought about what format to use early on, and a colon is pretty much set in stone for me.
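
For what it's worth, a minimal sketch of how the colon format could be parsed in Python (parse_slice is a made-up name, not my actual code):

def parse_slice(text):
    # A single number like "6" is shorthand for "6:6".
    if ":" not in text:
        text = f"{text}:{text}"
    start, _, end = text.partition(":")
    # Empty parts mean "use the default"; the caller decides what that is.
    return (int(start) if start else None, int(end) if end else None)

With a dash as separator, parse_slice("-6") would be ambiguous in exactly the way described above; with a colon it stays unambiguous.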

[–] thingsiplay@beehaw.org 5 points 5 days ago (10 children)

Fedora even switched to Wayland by default in 2016 (at least for the GNOME release). I don't know what they were thinking. They were already using Wayland 8 to 9 years ago... and it still has some "problems". Can't imagine what you were going through. :D

But compared to Fedora, Ubuntu only changed to Wayland temporarily, right? I mean, it was not in an LTS version. I installed LTS 18.04 and don't remember anything like that by default.

[–] thingsiplay@beehaw.org 5 points 5 days ago (1 children)

Those who don't care don't have anything to say and should not be the deciding factor. Why count the voices of those who don't care?

[–] thingsiplay@beehaw.org 9 points 5 days ago (12 children)

Who "promoted" it as superior to X11? Pretty much everyone I watch and read said that Wayland had its problems and they are working on them, but that it is the future. There are ideas and concepts that are superior to X11, but that does not mean it's fleshed out. I don't think anyone said that Wayland is superior to X11 in every aspect. Not even the most die-hard fans say that. :D

[–] thingsiplay@beehaw.org 22 points 5 days ago (2 children)

It doesn't need to be. The goal is not to recreate X11 and be compatible with it; otherwise it would defeat the idea of creating something new. Wayland is here because it needs to do things differently. It's the same way Linux operating systems will never be ready for every Microsoft user. And that's okay.

[–] thingsiplay@beehaw.org 6 points 5 days ago

Nothing changes much; it's just that the elements all run from top to bottom now and are wider. I liked the old one more, where I had to scroll less. This new layout is more smartphone-focused with its vertical layout, while I use my big PC screen with a horizontal layout. It's just not good. The only positive side is that it looks less cluttered and is straightforward.

[–] thingsiplay@beehaw.org 12 points 5 days ago

I'll believe it when I see it with my own eyes.

[–] thingsiplay@beehaw.org 2 points 5 days ago* (last edited 5 days ago) (1 children)

I think I'm going with these approaches. For '0', I'm now accepting it as the 0 element. That is not a 0-based index; it really means "before the first element". So any slice with an END of 0 is always empty. Anything that starts at 0 will basically give you as many elements as END points to.

  • 0: is equivalent to : and 1: (meaning everything)
  • 0 is equivalent to 0:0 and :0 (meaning empty)
  • 1:0 is still empty, because it starts after it ends, which reads like "start at 1, give me 0 elements"
  • 1:1 gives one element, the first, which reads like "start at 1, give me 1 element"

I feel confident about this solution. And thanks to everyone here, this was really what I needed. After trying it out on the test data I have, I personally like this model. It isn't anything surprising, right?
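
For reference, a minimal Python sketch of how this model could map onto regular slicing (my illustration, not the tool's actual code):

def take_slice(items, start=None, end=None):
    # 1-based START, inclusive END; 0 means "before the first element".
    if start is None:
        start = 1
    if end is None:
        end = len(items)
    # max(start - 1, 0) makes "0:" behave like "1:" (everything), and an
    # inclusive 1-based END equals an exclusive 0-based end index.
    return items[max(start - 1, 0):end]

items = ["a", "b", "c", "d"]
take_slice(items)          # ['a', 'b', 'c', 'd']  (":")
take_slice(items, 0, 0)    # []                    ("0")
take_slice(items, 1, 0)    # []                    ("1:0")
take_slice(items, 1, 1)    # ['a']                 ("1:1")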

[–] thingsiplay@beehaw.org 1 points 5 days ago (1 children)

Now that you ask, I don't have any example of this. I know the program head takes negative numbers to count from the last element backwards, as in ls -1 | head -n -1, but it does not start at 0. So yeah, 0 as the last element might not be as common as I thought.

 

I'm currently writing a CLI tool that handles a specific JSON data format. I also want to let the user get a slice of the file's item array. It's a slice in the form --slice START:END through commandline options, so for example --slice 1:2.

  1. Should I provide 0-based or 1-based indexing for the access? For example, --slice 1:2 with a 0-based index would start with the second element, and with a 1-based index it would start with the first element.
  2. And do you think it's better for the END to be inclusive or exclusive? For example, --slice 1:2 would get only one element if it's exclusive, or two elements if it's inclusive.

I know this is all personal taste, but I'm currently torn between all the options and cannot decide. So I thought I'd ask what you think. Maybe that helps me sort my own thoughts a bit. Thanks in advance.
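
To make the difference concrete, here is how --slice 1:2 would resolve on a small list under each combination, in Python notation (just an illustration):

items = ["a", "b", "c", "d"]

# --slice 1:2 under each convention:
items[1:2]   # 0-based, END exclusive -> ['b']        (1 element)
items[1:3]   # 0-based, END inclusive -> ['b', 'c']   (2 elements)
items[0:1]   # 1-based, END exclusive -> ['a']        (1 element)
items[0:2]   # 1-based, END inclusive -> ['a', 'b']   (2 elements)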

 

I just discovered a screen recording program on Flathub that uses the GPU efficiently and works great out of the box on Wayland, even the hotkeys.

Alternative video recorders I use too: OBS, Spectacle, Steam, RetroArch

I also have OBS set up, but in my opinion that is more suited for a workflow that does not change much. I don't know, maybe I'm wrong about that. But at least the hotkeys do not work for me in OBS. GPU Screen Recorder is a bit easier to set up and understand too. For Steam games I do not need this and already use Steam's builtin functionality. Recording RetroArch game emulation is problematic, so this tool comes in handy. And Spectacle from KDE has some video recording functionality too, but I haven't gotten into it much yet.

Actually, GPU Screen Recorder is a CLI tool that can easily be automated with scripts. I haven't tried that yet. The Flatpak version comes with a GUI (GTK) and has a new alternative GUI that resembles the Nvidia ShadowPlay look.

I use the Desktop Portal, which asks me whether to record a window or application instead of the entire screen (but it can do that too). It does not require root access for that.

 

The first 7-minute segment explains it. It's kind of self-advertisement, but I think this is important. One of my favorite gaming YouTube channels, "Skill Up", launched a new website for gaming articles. The goal is to have articles without AI, no advertisements, no sponsored articles, and no SEO-optimized content, to maintain high-quality content. I think this is really, really important and a good step.

 

Example script: https://gist.github.com/thingsiplay/ae9a26322cd5830e52b036ab411afd1f

Hi all. I just wanted to share a way to handle a so-called advanced help menu, where additional options are listed that are otherwise hidden from the regular help. Hidden options still function; this is just to have less clutter in the normal view.

I've searched the web to see how people do it, and this is the way I like most so far. If you think this is problematic, please share your thoughts. This is for a commandline terminal application that could also be automated through a script.

How it works at a high level

Before ArgumentParser() is called, we check sys.argv for the trigger option --advanced-help and set a variable to true or false accordingly. Then, when setting up the parser after the ArgumentParser() call, we add the --advanced-help option to the regular help listing.

import argparse
import sys

# Scan the raw arguments for the trigger option, but stop at "--",
# which conventionally ends option parsing.
advanced_help = False
for arg in sys.argv:
    if arg == "--":
        break
    if arg == "--advanced-help":
        advanced_help = True

parser = argparse.ArgumentParser()
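
The trigger option itself also gets registered, so it shows up in the help listing. Presumably something like this (a sketch; see the linked gist for the actual version):

parser.add_argument(
    "--advanced-help",
    action="store_true",
    help="show help message including advanced options",
)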

Continue setting up your options as usual. But for the help description of those you want to exclude from the regular -h output, add an inline if-else (ternary) expression. It supplies the help description only if the advanced_help variable is true; otherwise it supplies argparse.SUPPRESS to hide the option. Do this for all the options you want to hide.

parser.add_argument(
    "-c",
    "--count",
    action="store_true",
    default=False,
    help="print only a count of matching items per list, output file unaffected"
    if advanced_help
    else argparse.SUPPRESS,
)

Finally, we need to actually parse what we just set up. For this we pass our own argument list, based on sys.argv plus the regular --help option. This way --advanced-help shows the help message on its own, without needing -h or --help in addition.

if advanced_help:
    # sys.argv[0:0] is an empty list; effectively this parses
    # ["--help"] followed by the user's original arguments.
    args = parser.parse_args(sys.argv[0:0] + ["--help"] + sys.argv[1:])
else:
    args = parser.parse_args()

Run the program once with ./thing.py -h and once with ./thing.py --advanced-help to compare the output.

 

Watch on YouTube: https://youtu.be/_Pqfjer8-O4

Watch on SkipVids: https://skipvids.com/?v=_Pqfjer8-O4 (watch YouTube without using YouTube directly, and without ads)

Video Description:


Inside your smartphone, there are billions of transistors, but have you ever wondered how they actually work and how they can be combined to perform tasks like multiplying two numbers together? One rather interesting thing is that transistors are a lot like Lego Bricks assembled together to build a massive Lego set, which we’ll explore further. In this video, we dive into the nanoscopic world of transistors. First, we'll see how an individual transistor works, then we’ll see how they are connected together and organized into logic gates such as an inverter or an AND gate. Finally, we’ll see how logic gates are connected together into large Macrocells capable of performing arithmetic.

Table of Contents:

00:00 - Inside your Desktop Computer
00:26 - Transistors are like Lego Pieces
01:09 - Lego Bricks vs Transistors and Standard Cells
02:12 - Examining the Inverter Standard Cell 
03:24 - How do Basic Transistors work?
09:09 - Schematic for an Inverter Standard Cell
10:45 - Exploring the Macrocell 
13:20 - Conceptualizing how a CPU Works
15:11 - Brilliant Sponsorship
16:55 - The NAND Standard Cell
20:35 - A Surprisingly Hard Script to Write 
21:42 - The AND Standard Cell
23:16 - The Exclusive OR Standard Cell
23:54 - CMOS Circuit
24:27 - Understanding Picoseconds
25:51 - Special Thank You and Outro  
 

I desperately need some Python help. In short, I want to use multiple keys at once for sorting a list of dictionaries. I have a list of keys and don't know how to convert it into the required form.

This is a single key. self.items is a list of dictionaries, where d[key] resolves to the actual key name such as "core_name", which the list is then sorted by. This works as expected for a single sort, but not for multiple.

key = "core_name"
self.items = sorted(self.items, key=lambda d: d[key])
key = "label"
self.items = sorted(self.items, key=lambda d: d[key])

The problem is, sorting it multiple times like this gives me wrong results. The keys need to be applied in one go. I can do that manually like this:

self.items = sorted(self.items, key=lambda d: (d["core_name"], d["label"]))

But I need to assign a list of keys programmatically. The following does not work (obviously); I don't know how to convert this into the required form:

# Not working!
keys = ["core_name", "label"]
self.items = sorted(self.items, key=lambda d: d[keys])

I somehow need something like a map function, I guess? Something where d[keys] is replaced by "convert each key in keys into a list of d[key]". This needs to happen inside the lambda, because the key/value pairs are read dynamically from self.items.

Is it understandable what I'm trying to do? Does anyone have an idea?


Edit: Solution by Fred: https://beehaw.org/post/20656674/4826725

Just use a comprehension and create a tuple in place: sorted(items, key=lambda d: tuple(d[k] for k in keys))
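
A small self-contained check of that solution, with made-up sample data:

keys = ["core_name", "label"]
items = [
    {"core_name": "snes", "label": "b"},
    {"core_name": "gba", "label": "z"},
    {"core_name": "snes", "label": "a"},
]
# The tuple compares element by element, so "core_name" sorts first
# and "label" breaks ties.
items = sorted(items, key=lambda d: tuple(d[k] for k in keys))
# -> gba/z, snes/a, snes/b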

 

Direct link to the image in the browser: https://cosmos2025.iap.fr/fitsmap/?ra=150.1203188&dec=2.1880050&zoom=2

Article copied:


In the name of open science, the multinational scientific collaboration COSMOS on Thursday released the data behind the largest map of the universe. Called the COSMOS-Web field, the project, with data collected by the James Webb Space Telescope (JWST), consists of all the imaging and a catalog of nearly 800,000 galaxies spanning nearly all of cosmic time. And it’s been challenging existing notions of the infant universe.

“Our goal was to construct this deep field of space on a physical scale that far exceeded anything that had been done before,” said UC Santa Barbara physics professor Caitlin Casey, who co-leads the COSMOS collaboration with Jeyhan Kartaltepe of the Rochester Institute of Technology. “If you had a printout of the Hubble Ultra Deep Field on a standard piece of paper,” she said, referring to the iconic view of nearly 10,000 galaxies released by NASA in 2004, “our image would be slightly larger than a 13-foot by 13-foot-wide mural, at the same depth. So it’s really strikingly large.”

[Animation: a zoom-out from the center of the COSMOS-Web field to a full-size comparison between COSMOS-Web and the Hubble Ultra Deep Field]

The COSMOS-Web composite image reaches back about 13.5 billion years; according to NASA, the universe is about 13.8 billion years old, give or take one hundred million years. That covers about 98% of all cosmic time. The objective for the researchers was not just to see some of the most interesting galaxies at the beginning of time but also to see the wider view of cosmic environments that existed during the early universe, during the formation of the first stars, galaxies and black holes.

“The cosmos is organized in dense regions and voids,” Casey explained. “And we wanted to go beyond finding the most distant galaxies; we wanted to get that broader context of where they lived.”

A 'big surprise'

And what a cosmic neighborhood it turned out to be. Before JWST turned on, Casey said, she and fellow astronomers made their best predictions about how many more galaxies the space telescope would be able to see, given its 6.5 meter (21 foot) diameter light-collecting primary mirror, about six times larger than Hubble’s 2.4 meter (7 foot, 10 in) diameter mirror. The best measurements from Hubble suggested that galaxies within the first 500 million years would be incredibly rare, she said.

“It makes sense — the Big Bang happens and things take time to gravitationally collapse and form, and for stars to turn on. There’s a timescale associated with that,” Casey explained. “And the big surprise is that with JWST, we see roughly 10 times more galaxies than expected at these incredible distances. We’re also seeing supermassive black holes that are not even visible with Hubble.” And they’re not just seeing more, they’re seeing different types of galaxies and black holes, she added.


'Lots of unanswered questions'

While the COSMOS-Web images and catalog answer many questions astronomers have had about the early universe, they also spark more questions.

“Since the telescope turned on we’ve been wondering ‘Are these JWST datasets breaking the cosmological model? Because the universe was producing too much light too early; it had only about 400 million years to form something like a billion solar masses of stars. We just do not know how to make that happen,” Casey said. “So, lots of details to unpack, and lots of unanswered questions.”

In releasing the data to the public, the hope is that other astronomers from all over the world will use it to, among other things, further refine our understanding of how the early universe was populated and how everything evolved to the present day. The dataset may also provide clues to other outstanding mysteries of the cosmos, such as dark matter and physics of the early universe that may be different from what we know today.

“A big part of this project is the democratization of science and making tools and data from the best telescopes accessible to the broader community,” Casey said. The data was made public almost immediately after it was gathered, but only in its raw form, useful only to those with the specialized technical knowledge and the supercomputer access to process and interpret it. The COSMOS collaboration has worked tirelessly for the past two years to convert raw data into broadly usable images and catalogs. In creating these products and releasing them, the researchers hope that even undergraduate astronomers could dig into the material and learn something new.

“Because the best science is really done when everyone thinks about the same data set differently,” Casey said. “It’s not just for one group of people to figure out the mysteries.”

[Photo: Caitlin Casey in front of a lake. Credit: courtesy photo]

Caitlin Casey is an observational astronomer with expertise in high-redshift galaxies. She uses the most massive and unusual galaxies at early times to test fundamental properties of galaxy assembly (including their gas, stars, and dust) within a ΛCDM cosmological framework.

For the COSMOS collaboration, the exploration continues. They’ve headed back to the deep field to further map and study it.

“We have more data collection coming up,” she said. “We think we have identified the earliest galaxies in the image, but we need to verify that.” To do so, they’ll be using spectroscopy, which breaks up light from galaxies into a spectrum, to confirm the distance of these sources (more distant = older). “As a byproduct,” Casey added, “we’ll get to understand the interstellar chemistry in these systems through tracing nitrogen, carbon and oxygen. There’s a lot left to learn and we’re just beginning to scratch the surface.”

The COSMOS-Web image is available to browse interactively; the accompanying scientific papers have been submitted to the Astrophysical Journal and Astronomy & Astrophysics.

 

cross-posted from: https://beehaw.org/post/20234081

2 days ago I made a post that the game would not run on a Linux desktop PC (but it would on the Steam Deck). 10 hours ago they released an update that resolves this issue and makes the game run through Proton on a Linux desktop PC.

- The Beta now supports players on Linux thru Proton

I can confirm it does run, and I just did the short tutorial. I still have to play more, but I wanted to inform anyone who is interested in the game.

 

I want to share some thoughts that I had recently about YouTube spam comments. We all know these early bots in the YouTube comment sections, with those "misleading" profile pictures and obviously bot-like comments. Those comments are often either random remarks about any topic or copied from other users.

OK, why am I telling you that? Well, I think these bots are there to be recognized as bots. Their job is to be seen as a bot and to be deleted and ignored. Then everyone feels safe, thinking all bots are now deleted. But in reality there are more sophisticated bots among us. So the easy bots' job is to get deleted and basically mislead us into thinking that none are left.

What do you think? Sounds plausible, doesn't it? Or am I just paranoid? :D

 

Video description:


In this video, we'll talk about NVIDIA's last several months of pressure to talk about DLSS more frequently in reviews, plus MFG 4X pressure from the company. NVIDIA has repeatedly made comments to GN that interviews, technical discussion, and access to engineers unrelated to MFG 4X and DLSS are made possible by talking about MFG 4X and DLSS. NVIDIA has explicitly stated that this type of content is made "possible" by benchmarking MFG 4X in reviews specifically, despite us separately and independently covering it in other videos, and has made repeated attempts to get multiplied framerate numbers into its benchmark charts. We will not play those games. In the time since, NVIDIA has offered certain unqualified media outlets access to drivers which actual qualified reviewers do not have access to, but allegedly only under the premise of publishing "previews" of the RTX 5060 in advance of its launch. Some outlets were given access to drivers specifically to publish what we believe are puff pieces and marketing while reviewers were blocked.

TIMESTAMPS

00:00 - Giving Access, Then Threatening It
04:29 - Quid Pro Quo
08:28 - Social Manipulation
09:44 - It's Never Good Enough for NVIDIA
12:08 - NVIDIA is Vindictive
14:28 - Stevescrimination
17:38 - Not The First Time
19:00 - Gamers Are Entitled
 

https://github.com/thingsiplay/crc32sum

# usage: crc32sum [-h] [-r] [-i] [-u] [--version] [path ...]

crc32sum *.sfc
2d206bf7  Chrono Trigger (USA).sfc

Previously I used a Bash script to filter the checksum out of 7z's output. That always felt a bit hacky, and the output was not very flexible. Plus, the Python script does not rely on any external module or program. Also, the underlying 7z call would automatically search all subdirectories recursively when a directory was given as input; working around that would have required some additional rework, so I decided it was a better idea to start from scratch in a programming language. So I finally wrote this, to have a bit more control. My previous Bash script can be found here, in case you are curious: https://gist.github.com/thingsiplay/5f07e82ec4138581c6802907c74d4759

BTW, believe it or not, the Bash script, despite running multiple commands, starts and executes faster than the Python instance. But the difference is negligible, and the programmable control in Python is much more important to me.


What is this program for?

It calculates the CRC hash for each given file, using Python's integrated zlib module. It has a similar use to MD5 or SHA, but is way, way weaker and simpler. It's a quick and easy method to verify the integrity of files, for example after downloading from the web, when checking for data corruption on your external drives, or when creating expected checksum files.
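
The core of the zlib approach looks roughly like this (a simplified sketch, not the exact code from the repo):

import zlib

def crc32_of_file(path):
    # Feed the file in chunks so large files don't need to fit in memory;
    # zlib.crc32 accepts a running value to continue the checksum.
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)
    return f"{crc & 0xFFFFFFFF:08x}"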

It is important to know and understand that CRC-32 is not secure and should never be used cryptographically. Its use is limited to very simple use cases.

Linux does not have a standard program to calculate this CRC. This is a very simple program with output similar to what md5sum offers by default. Why use CRC at all? Most of the time CRC is not required; in fact, I favor MD5 or SHA when possible. But sometimes only a CRC is provided (it's often used by the retro emulation gaming scene). Theoretically CRC should also be faster than the other methods, but I haven't made a performance comparison (frankly, the difference doesn't matter to me).

 

Marathon looks like something an AI agent would create, in art style, gameplay, and story.

This is the next game from Destiny creator Bungie: a multiplayer extraction shooter. It has nothing to do with the original Marathon game it's based on, an old singleplayer game. Those who got hands-on time with the game describe it as having Destiny-like controls and animations, but in an extraction shooter mode.

As for me, I would probably even check the game out if it were free to play (it's a full-price game, like Concord) and if it were playable on Linux. Bungie is anti-Linux, so it's not for me anyway.
