max

joined 2 years ago
[–] max@nano.garden 1 points 2 years ago

Ahh, sorry, I had missed this comment! But if you do get this done, send it to me for payment!

2
submitted 2 years ago* (last edited 2 years ago) by max@nano.garden to c/community_projects@nano.garden
 

A new tool has been released to monitor the speed of transactions in real time.

The current maximum confirmation rate was measured during one of the daily speed tests this past week: 416 confirmations per second!

 

Ouch.

The swindler “airdropped” the fake address into the DEA’s account by dropping a token into the DEA account so it looked like the test payment made to the Marshals. The idea here was to basically trick the DEA into thinking the scammer’s address was actually the Marshal’s service’s address. Crypto addresses are so long that people usually just copy and paste instead of typing them fresh each time. Airdropping is a legitimate feature in cryptocurrency and sees an individual or entity drop tokens representing a certain value of a currency into someone’s account. It’s normally done as part of a launch of a new kind of token, but it’s also been abused by those seeking to dupe crypto owners into scams like this.

[–] max@nano.garden 2 points 2 years ago (1 children)

But then there is also the question if you trust github (and because of that microsoft, but also the USA because of laws) with always building from the sources, and adding nothing more.

Yesterday I would have said 'blah, they would not care about my particular small project'. But since then I read the paper recommended by a user in this post about building a compromised compiler that would install a backdoor into a login program. I now think it is not so crazy to think that intelligence agencies might collude with Microsoft to insert specific backdoors that somehow allow them to break privacy-related protocols or even recover private keys. Many of these protocols might rely on a common fundamental primitive, which a compromised compiler could recognize and exploit. I came here for a practical answer to a simple practical situation, but I have learned a lot extra 😁

[–] max@nano.garden 4 points 2 years ago (1 children)

No, I'm not concerned about a lawsuit. It's something that I want to do because I think it is important. I want to share tools with non-tech-savvy people who are unable to build them from source, without anyone needing to "trust" me. The reproducible builds standards are a very nice idea, and I will learn how to implement them.

But I still wonder whether my approach is valid: is printing the hash of the output executable during GitHub's build process, so that it is visible in the workflow logs, very strong evidence that the executable in the release with the same hash was built by GitHub through that transparent build process? Or is there a way for a regular user to fake these logs?

[–] max@nano.garden 1 points 2 years ago (3 children)

But if the sha256sum of the output binary is printed during GitHub's build process, and the hash matches what is in the release, isn't this enough to demonstrate that the binary in the release is the binary built during the workflow?

[–] max@nano.garden 2 points 2 years ago

Ooh, I did not know this was one of the properties of Rust.

[–] max@nano.garden 2 points 2 years ago* (last edited 2 years ago) (2 children)

Thanks! I am convinced now, I will learn how to create reproducible builds.

My worry is that the build runs through npm, and the dependencies pull in further dependencies such as OpenSSL libraries. It will be a lot of work to figure out what every npm dependency is, which libraries each one depends on, and how to make sure the correct versions can be installed and linked by someone trying to reproduce the build 10 years from now. So it looks like a difficult project, but I will read more about it and hopefully it is not as complicated as it looks!
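As a very rough sketch of where I would start (all file names and commands below are illustrative, not the project's actual setup):

```bash
# Record the toolchain and install exactly what the committed lockfile pins,
# so that a future rebuild resolves the same dependency tree.
node --version > build-env.txt          # note the Node.js version used to build
npm ci                                  # install from package-lock.json verbatim
npm ls --all >> build-env.txt           # snapshot the resolved dependency tree
npm run build                           # the project's build script (assumed name)
sha256sum dist/* | tee checksums.txt    # hash the outputs for later comparison
```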

11
submitted 2 years ago* (last edited 2 years ago) by max@nano.garden to c/hacking@lemmy.ml
 

The linked paper was pointed out to me during a discussion about trusting executables built from source. Perhaps this paper is a well-known document in the hacking community, but I found it quite interesting and thought I'd share it.

The document describes how the author created a bugged C compiler that, when compiling the UNIX "login" command, would insert a backdoor into it.

The actual bug I planted in the compiler would match code in the UNIX "login" command. The replacement code would miscompile the login command so that it would accept either the intended encrypted password or a particular known password. Thus if this code were installed in binary and the binary were used to compile the login command, I could log into that system as any user.

The author also describes strategies for building such a bugged compiler in a way that would be very difficult to detect.

The document ends with a moral statement about hacking, written from a 1984 perspective, which is also an interesting read.

[–] max@nano.garden 5 points 2 years ago* (last edited 2 years ago) (1 children)

Ooh, I think I found the paper!

Oof:

The actual bug I planted in the compiler would match code in the UNIX "login" command. The replacement code would miscompile the login command so that it would accept either the intended encrypted password or a particular known password. Thus if this code were installed in binary and the binary were used to compile the login command, I could log into that system as any user.

[–] max@nano.garden 9 points 2 years ago

My new phone runs GrapheneOS and I love it.

One recommendation that I would give people is that it does not need to be an all-or-nothing jump into the abyss. It can be a bit disheartening when you try to get rid of all the privacy-invasive things in your life and you get cut off from your family and friends.

After some failed attempts, the strategy I have found most successful is to have a new phone onto which I installed GrapheneOS, while keeping the older phone with WhatsApp. The older phone stays in airplane mode, connected to WiFi at my home. It is effectively a landline. I can still use it once or twice a day to check on my family through WhatsApp without having to broadcast my location all day to Meta. This way I don't need to install any sandboxed Google Play services on my new phone; the old phone is my sandboxed Google Play. I also use the old phone for verifications, 2FA, and anything else that I don't want to contaminate my new phone with.

Over time I am finding that my GrapheneOS phone is perfectly functional. The main difficulty is the chat services used by my family, friends, and work-related "group chats". I have convinced some people to join my XMPP server, including my mom (wuhuu), but it is an uphill battle. That's why the other phone is still essential for me.

[–] max@nano.garden 1 points 2 years ago* (last edited 2 years ago)

Thanks. In the future I will adopt the Reproducible Builds practices and use OpenBSD to sign my builds.

In the immediate situation I want to know whether there is a way to use GitHub as my trusted third-party builder. I would like to share something with people, some of whom might not have the skills to replicate the build themselves, but I would still like to be able to point them to something that is easy to understand and gives them a clear argument.

My current argument is: "See, in the GitHub logs you can see that GitHub generated that hash internally during the workflow, and it matches the hash of the file that you have downloaded. So this way you can be sure that this build really comes from this source code, which was only changed here and there." Of course I need to make absolutely sure that my argument is solid. I know that I'm not being malicious, but I don't want to give them an argument of trust and then find out that I have misled them, and that it was in fact possible to fake this.

[–] max@nano.garden 2 points 2 years ago (5 children)

I think you can even upload release files manually, independently of if you use actions or not, so it can never be guaranteed that it was built from the sources.

True, but that's why my current idea is the following:

As part of the workflow, GitHub will build the executable, compute a few different hashes (sha256sum, md5, etc.), and print those hashes in the GitHub logs. In that same workflow, GitHub will upload the files directly to the release.

So, if someone downloads the executable, they can compute the sha256sum and check that it matches the sha256 that was computed by GitHub during the action.
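On the user's side, the check would be something like this (the file names are just placeholders):

```bash
# Hash the downloaded release asset and compare the result against the hash
# printed in the GitHub Actions log for that workflow run.
sha256sum ./my-app-linux-x64

# Or, if the same workflow also uploaded a checksums file to the release:
sha256sum --check checksums.txt
```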

Is this enough to prove that the executable they are downloading is the same executable that GitHub built during that workflow? Since a workflow is associated with a specific push, it is possible to check the source code that was used for that workflow.

In this case, I think that the only one with the authority to fake the logs or mess with the source during the build process would be GitHub, and it would be really hard for them to do it because they would need to prepare in advance specifically for me. Once the workflow goes through, I can save the hashes too and after that both GitHub and I would need to conspire to trick the users.

So, I am trying to understand whether my idea is flawed and there is a way to fake the hashes in the logs, or if I am over-complicating things and there is already a mechanism in place to guarantee a build.

[–] max@nano.garden 5 points 2 years ago* (last edited 2 years ago)

I think that any step that facilitates verifying the build is great. If trust is required, then I should simply not release any executables if I want to remain anonymous. I would like to be able to release executables without needing to ask people to blindly trust me. I would like to be able to show them reasonably good evidence that the program is built from the source that I say it is.

 

I have forked a project's source code on GitHub. The program takes a private key as an input, and that key must never leave the client. If I want to share a pre-built executable as a release, it is essential that I can prove beyond reasonable doubt that it is built from the published source.

I have learned how to publish releases by using a workflow in GitHub Actions, such that GitHub itself will build the project and then prepare a release draft with the built files as well as the file hashes.

However, I noticed that the release is first drafted, and at that point I have the option to manually swap the executable and the hashes. As far as I can tell, a user will not be able to tell if I swapped a file and its corresponding hashes. Or, is there a way to tell?

One potential solution I have found is to pipe the output of the hashing step both to a file that is stored and to the publicly visible logs by using "tee". That way, someone can look through the logs of the build process and confirm that the hashes match the hashes published in the release.

Like this:
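A minimal sketch of the step I mean, assuming the build output lands in `dist/my-app` (the names are placeholders):

```bash
# Inside the GitHub Actions build step: hash the binary, write the hash to a
# file that will be attached to the release, and let tee echo it to stdout so
# it also shows up in the publicly visible workflow log.
sha256sum dist/my-app | tee dist/my-app.sha256
md5sum    dist/my-app | tee dist/my-app.md5
```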

I would like to know whether:

  • There is already some built-in method to confirm that a file is the product of a GitHub workflow

  • The GitHub Actions logs can easily be tampered with by the repo owner, and the hashes in the logs swapped, such that my approach is still not good enough evidence

  • There is another, perhaps more standard, method to prove that the executable is built from a specific source code

 

cross-posted from: https://feddit.de/post/2396303

Bad actors are actively exploiting this flaw to steal funds from affected wallets on multiple blockchains, they say.

 

How does it work?

Transactions/blocks are recorded live from the Nano currency network. Amounts are presented as floating text (>x = receive, x> = send). A melody of at most 64 notes is created by interpreting the incoming block hash, type, and amount.

For every new block recorded, the melody is extended and old notes are discarded.

If blocks arrive faster than 64 per 8-second interval, the excess blocks are ignored.
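A toy sketch of the kind of mapping described above (not the tool's actual code; the 12-note scale and the hash-to-note rule are my own assumptions):

```bash
#!/usr/bin/env bash
# Map each hex character of a block hash to a note, capping the melody at 64 notes.
hash="$1"                                  # e.g. a 64-character block hash
notes=(C C# D D# E F F# G G# A A# B)
melody=()
for (( i = 0; i < ${#hash} && i < 64; i++ )); do
  d=$(( 16#${hash:i:1} ))                  # hex digit as a number 0-15
  melody+=( "${notes[d % 12]}" )           # fold it into the 12-note scale
done
echo "${melody[*]}"
```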

 

The difference between a Principal Representative and a regular Representative (with a voting weight of < 0.1%) is that the votes of the Principal Representative are re-broadcast by other nodes, while the votes of a regular Representative are only communicated directly by that Representative.

What this means is that, if we had a network where every node held less than 0.1% of the voting weight, a given node would need direct communication with each voting node to collect votes until it had enough to exceed the quorum threshold, so we could not take advantage of the re-broadcasting shortcut.
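Put into numbers (all values made up for illustration), the distinction boils down to a threshold check like this:

```bash
# Whether a representative's votes get re-broadcast by peers depends on
# crossing the 0.1%-of-online-weight threshold (numbers are invented).
online_weight=100000000   # total online voting weight, arbitrary units
my_weight=90000           # weight delegated to my representative
threshold=$(( online_weight / 1000 ))   # 0.1% of the online weight

if (( my_weight >= threshold )); then
  echo "Principal Representative: peers re-broadcast its votes"
else
  echo "Regular Representative: its votes travel only over direct connections"
fi
```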

Is my understanding correct? Or is there something more to the distinction between the types?

 

I am currently running a non-voting node, and I am considering flipping it into a voting representative node (with much less than 0.1% voting weight).

What is stopping me from doing this is that I would like to ensure that my node is a positive addition to the network, and not a hindrance.

What metrics can I look at during the operation of a node to determine whether it is contributing to the network in a positive way?
