Kissaki

joined 2 years ago
[–] Kissaki@programming.dev 11 points 2 months ago

Roman @rtsisyk revoked GitHub owner permissions from Alexander @biodranik and Viktor @vng and granted them to the community contributor @pastk. This triggered GitHub's automatic "sanctions" check, and the whole GitHub OM organization was automatically archived and admin access was blocked until OM's appeal was reviewed. It was unknown whether and when GitHub would review Organic Maps' appeal and unblock the repositories, so 2 weeks later the project migrated to the self-hosted git.omaps.dev/organicmaps instance, running the free and open source software forge Forgejo.

What the fuck? GitHub blocked the organization because an automated security evaluation triggered (probably a good thing), but then provided no review for over two weeks (obviously a very bad thing)?

[–] Kissaki@programming.dev 1 points 2 months ago

What are you referring to? The reasons to fork, what a fork/forking process is, or what it means for this project?

Contributors disagreed with how the project was run and controlled. They committed to running their own project based on the existing one, with more collaborative ownership and governance.

[–] Kissaki@programming.dev 5 points 2 months ago

I also want locally deleted files to be deleted on the server.

Sometimes I even move files around (I believe in directory structure) and again, git deals with this perfectly. If it weren’t for the lossless-to-lossy caveat.

It would be perfect if my script could recognize that just like git does, instead of deleting and reuploading the same file to a different location.

If you were to use Git, deleted files get deleted in the working copy, but not in history. The content is still there, taking up disk space, though it won't be transmitted again.

I'd look at existing backup and file sync solutions. They may have what you want.

For an implementation, I would work with an index. If you store path + file size + content checksum, you can match files under different paths. By comparing the local index against the remote one, you can identify file moves and replay the move on the remote side too.
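A minimal Python sketch of that index idea (the function names and return shapes are made up for illustration): build a map of size + checksum per path on each side, then treat identical content found under different paths as a move.

```python
import hashlib
import os

def build_index(root):
    """Map (file size, sha256 hex digest) -> relative path for files under root.

    Simplification: if two files have identical content, the last one wins.
    """
    index = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            rel = os.path.relpath(path, root)
            index[(os.path.getsize(path), digest)] = rel
    return index

def detect_moves(local_index, remote_index):
    """Return (remote_path, local_path) pairs where identical content
    exists under different paths, i.e. a rename/move to replay remotely."""
    moves = []
    for key, local_path in local_index.items():
        remote_path = remote_index.get(key)
        if remote_path is not None and remote_path != local_path:
            moves.append((remote_path, local_path))
    return moves
```

Hashing every file is the expensive part; in practice the file size works as a cheap pre-filter so you only checksum candidates that could possibly match.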

[–] Kissaki@programming.dev 0 points 2 months ago* (last edited 2 months ago)

What makes you think anyone blindly trusted it?

They pointed out how it was almost correct, and the two places they had to correct. Obviously, they verified it.

There, and at other times, they talked about similar approaches: generating a starting point rather than "blindly trusting" or anything like that.

[–] Kissaki@programming.dev 5 points 2 months ago

They look at North Korea, Russia, and then the EU, and see no difference between regulation/protection and censorship.

Uncomfortable to read this on a .gov website.

I guess just like it's within the EU's right to reject and criminalize some things, and some people may no longer travel to the EU or may face arrest, the visa conditions are within the US's discretion. The argumentation and policy are just so backward, in my eyes. Regulation does not oppose free speech; it enables it.

[–] Kissaki@programming.dev 1 points 2 months ago* (last edited 2 months ago) (1 children)

The linked page is not a web page; it returns content type application/activity+json.

I assume it is merely temporarily broken.

[–] Kissaki@programming.dev 4 points 2 months ago (1 children)

I'm gonna file a complaint…

[–] Kissaki@programming.dev 2 points 2 months ago

Is there new content available for download?

We'll send the list straight to your inbox.

Please let me read the article before showing me subscription onboarding popups.

Still no idea what they mean by downloads, or what I would even be subscribing to.

[–] Kissaki@programming.dev 4 points 2 months ago

Blazor allows JavaScript like interactions, allows the developer to write in C# but gets rendered serverside

Blazor can also compile .NET to WebAssembly and run it in the web browser.

[–] Kissaki@programming.dev 3 points 2 months ago* (last edited 2 months ago) (1 children)

Probably because it's much simpler to integrate than Jenkins.

Their own CI system 'Actions' is in open alpha.

Honestly, I'm glad they didn't use Jenkins. Managing it is a convoluted mess. (I don't have experience with Woodpecker CI nor with Forgejo Actions in particular, though.)


[–] Kissaki@programming.dev 3 points 2 months ago

Thank you for sharing. Your views and experience were very interesting and insightful.

I wanted to pick up on some of your points, but I can't seem to come up with fitting responses or meaningful follow-on thoughts.

[–] Kissaki@programming.dev 2 points 2 months ago (2 children)

Is this an alternative to Active Directory / LDAP?

 

GitHub repo

Examples

> (15 kg/m) * 7cm
# (((15 * kg) / m)) * 7 * cm
out = 1050 * g
> 1 |> cos |> log
# 1 |> cos |> log
out = -0.6156264703860141
> display dev
# Display mode: dev (Developer)
>>> 1.5
# 1.5
out = 1.5
    # IEEE 754 - double - 64-bit
    #
    = 0x_3FF80000_00000000
    = 0x____3____F____F____8____0____0____0____0____0____0____0____0____0____0____0____0
    #    seee eeee eeee ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff
    = 0b_0011_1111_1111_1000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000
    #   63                48                  32                  16                   0
    #
    # sign    exponent              |-------------------- fraction --------------------|
    =   1 * 2 ^ (1023 - 1023) * 0b1.1000000000000000000000000000000000000000000000000000
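The same bit-level breakdown can be reproduced with Python's standard struct module (shown for the 1.5 example above; nothing tool-specific):

```python
import struct

# Pack 1.5 as an IEEE 754 double (big-endian) and reinterpret as a 64-bit int.
bits = struct.unpack(">Q", struct.pack(">d", 1.5))[0]
print(f"0x{bits:016X}")  # 0x3FF8000000000000

sign = bits >> 63                 # 1 sign bit
exponent = (bits >> 52) & 0x7FF   # 11 exponent bits, biased by 1023
fraction = bits & ((1 << 52) - 1) # 52 fraction bits (implicit leading 1)
print(sign, exponent - 1023, hex(fraction))  # 0 0 0x8000000000000

# value = (-1)^sign * 2^(exponent - 1023) * (1 + fraction / 2^52)
value = (-1) ** sign * 2 ** (exponent - 1023) * (1 + fraction / 2 ** 52)
print(value)  # 1.5
```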
 

I track and version my Nushell environment and configuration in a public repository.

I added a GitHub Actions workflow that tests these files. That ensures a more defined environment and explicit prerequisites and assumptions, since they have to be set up in the workflow configuration. Since I mostly work on Windows but set up the CI to run on Linux/Ubuntu, it also ensures platform neutrality.


Since Nushell version 0.101.0, there's no need for a default, base env.nu or config.nu, and the nu binary can be called with only the custom, minimal env and config files.

The nu binary offers --env-config and --config parameters.

I noticed that when using them, errors do not lead to error exit codes; nu will continue execution and report success despite env or config not loading [correctly]. (Bug Ticket #14745)


Do you version your environment configuration? Only locally, or with a hosted repository as a backup or to share? Do you run automated tests on it?

 

6
submitted 7 months ago* (last edited 7 months ago) by Kissaki@programming.dev to c/dotnet@programming.dev
 

azureedge.net dotnet CDN URLs will cease to work sometime next year after January 15th.

One of two Azure CDN providers was Edgio, which filed for bankruptcy. CDN migration is in progress.


We expect that most users will not be directly affected; however, it is critical that you validate whether you are affected, and watch for downtime or other kinds of breakage.

We maintain multiple Content Delivery Network (CDN) instances for delivering .NET builds. Some end in azureedge.net. These domains are hosted by edg.io, which will soon cease operations due to bankruptcy. We are required to migrate to a new CDN and will be using new domains going forward.

Affected domains:

  • dotnetcli.azureedge.net
  • dotnetbuilds.azureedge.net

Unaffected domains:

  • dotnet.microsoft.com
  • download.visualstudio.microsoft.com

  • Update dotnetcli.azureedge.net to builds.dotnet.microsoft.com
  • Update dotnetcli.blob.core.windows.net to builds.dotnet.microsoft.com

We also noticed that there is a lot of use of our storage account: dotnetcli.blob.core.windows.net. Please also search for it. The storage account is unaffected, however, it would be much better for everyone if you used our new CDN. It will deliver better performance.
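A quick way to do that search is a small script. This is an illustrative Python sketch (the helper name is made up; the replacement table just restates the domains from the announcement):

```python
import os

# Legacy .NET CDN hosts and the replacement domain from the announcement.
REPLACEMENTS = {
    "dotnetcli.azureedge.net": "builds.dotnet.microsoft.com",
    "dotnetcli.blob.core.windows.net": "builds.dotnet.microsoft.com",
}

def find_legacy_cdn_refs(root):
    """Return (path, line_number, old_host, new_host) for each reference
    to a legacy host found in text files under root."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    lines = f.readlines()
            except (UnicodeDecodeError, OSError):
                continue  # skip binary or unreadable files
            for lineno, line in enumerate(lines, start=1):
                for old, new in REPLACEMENTS.items():
                    if old in line:
                        hits.append((path, lineno, old, new))
    return hits
```

Running it over a checkout lists every file and line still pointing at the old hosts, so you can update them to the new CDN domain.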

 
  • Simplified Startup Configuration
  • path self
  • chunk-by
  • term query
  • merge deep
  • WASM support (again)
  • sys net includes new columns mac and ip
  • raw string pattern matching
  • dates can now be added to durations
  • additional explore keybinds
  • version in startup banner
  • input --default
  • PowerShell script invocation on Windows
  • new introspection tools

Breaking Changes:

  • ++ operator, stricter command signature parsing (resolves silent parse errors)
  • group-by now supports "groupers" (multiple criteria)
  • timeit
  • sys cpu
  • from csv and from tsv
  • std/iter scan
  • completion sorting in custom completers, import module naming with normalization
  • display_output hook
  • du flag changes
  • Code specific environment variables updated during source
2
nmake cancel (infosec.pub)
submitted 8 months ago* (last edited 8 months ago) by Kissaki@programming.dev to c/programming_horror@programming.dev
 

ffaattaall eerrrroorr UU11005588: : tteerrmmiinnaatteedd bbyy uusseerr

 

a new NuGet dependency graph resolver built to dramatically improve performance

The new algorithm they developed uses a more streamlined approach, representing the graph as a flattened set where each node is created only once. This makes the in-memory dependency graph much smaller and easier to work with. Conflicts are resolved as the graph is being built, which avoids the need for the repetitive passes that the old dependency graph resolution algorithm required.

This new approach had dramatic results. The original dependency graph, which in our testing would create 1.6 million nodes for a complex project, was reduced to just 1,200 nodes. With fewer nodes to process, restore times dropped significantly; from 16 minutes down to just 2 minutes.
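The "each node is created only once" idea can be sketched in a few lines of Python. This is a toy illustration of the flattened-set approach, not NuGet's actual code; the registry layout and the conflict rule (higher version simply wins) are assumptions:

```python
def resolve(root, registry):
    """Resolve a dependency graph to one version per package.

    registry maps (package, version) -> list of (package, version) deps.
    Each package occupies exactly one slot in `chosen` (the flattened set),
    and version conflicts are settled while the graph is being built,
    instead of building a full tree and making repeated pruning passes.
    """
    chosen = {}    # package -> version; one node per package
    queue = [root]
    while queue:
        package, version = queue.pop()
        current = chosen.get(package)
        if current is not None and current >= version:
            continue  # conflict already resolved in favor of the kept version
        chosen[package] = version
        queue.extend(registry.get((package, version), []))
    return chosen
```

For example, if app 1 depends on lib 1 and util 1, and lib 1 depends on util 2, the resolver keeps a single util node and settles on version 2 as it walks the graph.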

 

On November 22, 2024, Deno formally filed a petition with the USPTO to cancel Oracle’s trademark for “JavaScript.” This marks a pivotal step toward freeing “JavaScript” from legal entanglements and recognizing it as a shared public good.

Oracle has until January 4, 2025, to respond. If they fail to act, the case will go into default, and the trademark will likely be canceled.

 
