raldone01

joined 2 years ago
[–] raldone01@lemmy.world 2 points 16 hours ago

I actually like the taste of unseasoned food.

[–] raldone01@lemmy.world 1 points 2 weeks ago

I guess the lowest I would go is -30°C. Ideally something varying between 20°C and -30°C.

[–] raldone01@lemmy.world 1 points 2 weeks ago

Fixed. Blame it on typing this on mobile and being in a rush to the underground.

[–] raldone01@lemmy.world 3 points 2 weeks ago

Hahah I guess both. You learn a lot with either. 😅

[–] raldone01@lemmy.world 4 points 2 weeks ago (2 children)

I like to think that I am good at root causing software issues.

[–] raldone01@lemmy.world 26 points 2 weeks ago* (last edited 2 weeks ago) (10 children)

Cold*. You can always put on more but once you're naked, well you're naked.

*Limits apply.

[–] raldone01@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

I don't think the promise chain is really needed here.

I used this script:

import Axios from 'axios'
import OldFS from 'fs'
import { PromiseChain } from '@feather-ink/ts-utils'

const fs = OldFS.promises

const image = process.argv[2]
const destination = `http://${process.argv[3]}/vfs/ota`
const now = process.argv[4] === 'now'
const once = process.argv[4] === 'once'

async function triggerUpdate(): Promise<void> {
  console.log('Uploading new binary')
  const file = await fs.readFile(image)

  await Axios({
    method: 'POST',
    url: destination,
    headers: {
      'Content-Type': 'application/octet-stream',
      'Content-Length': file.byteLength
    },
    data: file
  })
  console.log('Finished uploading')
}

(async () => {
  const updateChain = new PromiseChain()
  console.log(`Watching file '${image}' for changes\nWill upload to '${destination}'!`)
  if (once) {
    await triggerUpdate()
    return
  }
  if (now)
    await updateChain.enqueue(triggerUpdate)
  OldFS.watch(image, async (eventType) => {
    if (eventType !== 'change')
      return
    let succ = false
    do {
      try {
        console.log('Change detected')
        await updateChain.enqueue(triggerUpdate)
        succ = true
      } catch (e) {
        console.error(e)
        console.log('Retrying upload')
      }
    } while (!succ)
    console.log('Upload finished')
  })
})()
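For reference, `PromiseChain` from `@feather-ink/ts-utils` just serializes async tasks so two file-change events can't upload at the same time. If you don't want the dependency, a rough stand-in (my sketch of the assumed behavior, not the library's actual implementation) looks like this:

```typescript
// Minimal stand-in for PromiseChain: runs enqueued async tasks strictly
// one after another, in enqueue order.
class SimplePromiseChain {
  private tail: Promise<unknown> = Promise.resolve()

  enqueue<T>(task: () => Promise<T>): Promise<T> {
    // Start this task only after every previously enqueued task has settled.
    const result = this.tail.then(() => task(), () => task())
    // Keep the chain alive even if this task rejects.
    this.tail = result.catch(() => undefined)
    return result
  }
}
```

`enqueue` returns the task's own promise, so the retry loop in the watcher above still sees failures.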

Relevant code on the ESP:

You can ignore my C++ wrapper stuff and just put this in a handler of the stock web server.

auto ota = vfs->addHandler(makeDirectory("ota"));
{
  ota->addHandler(makeDirect([](auto &con) {
    if (con.req->method != HTTP_POST)
      return HandlerReturn::UNHANDLED;

    // https://github.com/espressif/esp-idf/tree/master/examples/system/ota/native_ota_example/main
    // https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-reference/system/ota.html
    auto updatePartition = esp_ota_get_next_update_partition(nullptr);
    if (updatePartition == nullptr)
      return sendError(con, 500, "No free ota partition found!");
    esp_ota_handle_t otaHandle;
    auto err = esp_ota_begin(updatePartition, con.req->content_len, &otaHandle);
    if (err != ESP_OK)
      return sendError(con, 500, std::string{"Can't start ota update: "} + esp_err_to_name(err), true);

    // buf is a scratch buffer defined in the surrounding code
    int receivedBytes = 0;
    do {
      auto end = httpd_req_recv(con.req, buf.data(), buf.size());
      // ESP_LOGE(TAG, "Received %d", receivedBytes);
      // hexDump("RECV:", buf.data(), end);
      if (end <= 0) {
        esp_ota_abort(otaHandle);
        return sendError(con, 500, "Error receiving", true);
      }
      err = esp_ota_write(otaHandle, buf.data(), end);
      if (err != ESP_OK) {
        esp_ota_abort(otaHandle);
        return sendError(con, 500, std::string{"Error writing: "} + esp_err_to_name(err), true);
      }
      receivedBytes += end;
    } while (receivedBytes < con.req->content_len);

    err = esp_ota_end(otaHandle);
    if (err != ESP_OK)
      return sendError(con, 500, std::string{"Failed to end: "} + esp_err_to_name(err), true);

    err = esp_ota_set_boot_partition(updatePartition);
    if (err != ESP_OK)
      return sendError(con, 500, std::string{"esp_ota_set_boot_partition failed: "} + esp_err_to_name(err), true);
    auto ret = sendOK(con);
    FactoryResetServiceCon().reboot(1000 / portTICK_PERIOD_MS);
    return ret;
  }));
}

I also used a custom partition table with two OTA app partitions, so that when my program crashes it can just fall back to booting the previous version.

Here it is for reference:

partitions.csv

# Name,   Type, SubType, Offset,  Size, Flags
# Note: if you change the phy_init or app partition offset, make sure to change the offset in Kconfig.projbuild
nvs,      data, nvs,     0x011000, 0x006000,
otadata,  data, ota,     0x017000, 0x002000,
phy_init, data, phy,     0x019000, 0x001000,
ota_0,    app,  ota_0,   0x020000, 0x1F0000,
ota_1,    app,  ota_1,   0x210000, 0x1F0000,

Note: This partition table is for a special model of the ESP32 though.

Also another disclaimer: This code does not represent my current coding abilities and may be outdated - it worked well though.

[–] raldone01@lemmy.world 1 points 1 month ago

Hahahah. Awesome. Have fun! You just need a simple web server. The built-in one will do, and then you use the OTA functions of the ESP-IDF.

[–] raldone01@lemmy.world 3 points 1 month ago* (last edited 1 month ago) (3 children)

Back in school my friends all flashed their MCUs with 4-8 MB images over serial at 115200 baud. I set up OTA updates over Wi-Fi. They were all fascinated by my speedy flashes. However, when I offered to help them set it up, not one was interested: their setup was working as is, and slow flashing wasn't a "bad" thing since it gave them an excuse to do other things.

We are talking minutes vs seconds here.

The teachers were surprised by my quick progress and iterations. When I told them my "trick" they gave me bonus points, but they weren't interested in learning how to do OTA either, even though it was very easy. A simple 20-minute first-time setup would have saved sooo much time during the year.

[–] raldone01@lemmy.world 1 points 1 month ago* (last edited 1 month ago) (1 children)

Very interesting. I hope this passes as an actual standard. I looked around but couldn't find information on how to enable it in the web browser. It just says Firefox is not supported.

Never mind, I found the extension. Will try it again.

[–] raldone01@lemmy.world 2 points 1 month ago (3 children)

I would love it if there were a standard that websites could use to receive donations. An integrated browser add-on that tracks what you visit and gives you a review each month before distributing the funds would be great. It should accumulate money to avoid transaction fees on tiny amounts.

[–] raldone01@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

MacLeod is so great! Good pick.

I would probably go for Urban Conspiracy by Jules Gaia. https://music.youtube.com/watch?v=3_6ka9EV1i4

Maybe just alternate between these two.

 

I have a static IP (let's say 142.251.208.110).

I own the domain: website.tld

My registrar is GoDaddy.

If I want to change my nameservers, GoDaddy won't allow me to enter a static IP. It wants a hostname. I observed that many people use ns1.website.tld and ns2.website.tld.
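i.e. (if I understand that setup right) a delegation that looks something like this, with my own IP for ns1 and a made-up one for ns2:

```
website.tld.      IN NS  ns1.website.tld.
website.tld.      IN NS  ns2.website.tld.
ns1.website.tld.  IN A   142.251.208.110
ns2.website.tld.  IN A   142.251.208.111
```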

I don't understand how this can work, because ns1.website.tld would be served by my own DNS server, which is not yet known to others.

Do I need a second domain like domains.tld, where I use the registrar's DNS server to serve ns1.domains.tld, which I can then use as the nameserver for website.tld?

I would like to avoid the registrar's nameservers and avoid getting a second domain just for DNS.

Thank you for your input.

 

I have two machines running docker. A (powerful) and B (tiny vps).

All my services are hosted at home on machine A, and all DNS records point to A. I want to point them to B instead and implement split-horizon DNS in my local network to still access A directly. Ideally, A is no longer reachable from outside without going over B.

How can I forward requests on machine B to A over a tunnel like WireGuard without losing the source IP addresses?

I tried to get this working by creating two WireGuard containers. I think I only need iptables rules on WireGuard container A, but I am not sure. I am a bit confused about the iptables rules needed to get WireGuard to properly forward the requests through the tunnel.

What are your solutions for such a setup? Is there a better way to do this? I would also be glad for some keywords/existing solutions.

Additional info:

  • Ideally I would like to not leave Docker.
  • Split-horizon DNS is no problem.
  • I have a static IPv6 and IPv4 address on both machines.
  • I also have spare IPv6 subnets that I can use for intermediate routing.
  • I would like to avoid Cloudflare.
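To make the question concrete, this is roughly the direction I was poking at on B (interface names, ports, and tunnel IPs are made up, and I'm not sure these rules are right; that's exactly what I'm asking about):

```shell
# Assumed: eth0 = B's public interface, wg0 = tunnel to A,
# A = 10.0.0.2 inside the tunnel.
# DNAT incoming web traffic to A through the tunnel; no MASQUERADE on wg0,
# so the client's source IP should survive:
iptables -t nat -A PREROUTING -i eth0 -p tcp -m multiport --dports 80,443 -j DNAT --to-destination 10.0.0.2
iptables -A FORWARD -i eth0 -o wg0 -j ACCEPT
iptables -A FORWARD -i wg0 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# The part I'm unsure about: A then has to route the reply packets for these
# connections back through wg0 (policy routing?), otherwise they leave via
# A's default gateway and the connection breaks.
```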
 

A Containerized Night Out: Docker, Podman, and LXC Walk into a Bar


🌆 Setting: The Busy Byte Bar, a local hangout spot for tech processes, daemons, and containerization tools.


🍺 Docker: walks in and takes a seat at the bar Bartender, give me something light and easy-to-use—just like my platform.

🍸 Bartender: Sure thing, Docker. One "Microservice Mojito" coming up.


🥃 Podman: strides in, surveying the scene Ah, Docker, there you are. I heard you've been spinning up a lot of containers today.

🍺 Docker: Ah, Podman, the one who claims to be just like me but rootless. What'll it be?

🥃 Podman: I'll have what he's having but make it daemonless.


🍹 LXC: joins the party, looking slightly overworked You two and your high-level functionalities! I've been busy setting up entire systems, right down to the init processes.

🍺 Docker: Oh, look who decided to join us. Mr. Low-Level himself!

🥃 Podman: You may call it low-level, but I call it flexibility, my friends.

🍸 Bartender: So, LXC, what can I get you?

🍹 LXC: Give me the strongest thing you've got. I need all the CPU shares I can get.


🍺 Docker: sips his mojito So, Podman, still trying to "replace" me?

🥃 Podman: Replace is such a strong word. I prefer to think of it as giving users more options, that's all. winks

🍹 LXC: laughs While you two bicker, I've got entire Linux distributions depending on me. No time for small talk.


🍺 Docker: Ah, but that's the beauty of abstraction, my dear LXC. We get to focus on the fun parts.

🥃 Podman: Plus, I can run Docker containers now, so really, we're like siblings. Siblings where one doesn't need superuser permissions all the time.

🍹 LXC: downs his strong drink Well, enjoy your easy lives. Some of us have more... weight to carry.


🍸 Bartender: Last call, folks! Anyone need a quick save and exit?

🍺 Docker: I'm good. Just gonna commit this state.

🥃 Podman: I'll podman checkpoint this moment; it's been fun.

🍹 LXC: Save and snapshot for me. Who knows what tomorrow's workloads will be?


And so, Docker, Podman, and LXC closed their tabs, leaving the Busy Byte Bar to its quiet hum of background processes. They may have different architectures, capabilities, and constraints, but at the end of the day, they all exist to make life easier in the ever-expanding universe of software development.

And they all knew they’d be back at it, spinning up containers, after a well-deserved system reboot.

🌙 The End.

I was a bit bored after working with Podman, Docker and LXC, so I asked ChatGPT to generate a fun story about these technologies. I think it's really funny and way better than these things usually turn out. I really suspect it was repurposed from somewhere, but a quick search for something similar turned up nothing.

I hope you can enjoy it despite it being AI-generated.
