baseless_discourse

joined 1 year ago
[–] [email protected] 1 points 4 days ago

I think modern coffee is judged by how well the taste reflects its distinct characteristics, which include the physical characteristics of the farm (altitude, etc.), the fermentation process, and the roasting process.

It takes a lot of work to produce good coffee, and the end result should let these efforts shine. Acidity, fragrance, and funk are great ways to communicate the life of the coffee to the taster. That is why they are typically the standard for judging good coffee, rather than the generic, monotone "smoothness" shared by Kirkland Signature, Peet's, Starbucks, and gas station coffees.

[–] [email protected] 4 points 4 days ago

Please don't. Not only is it a product of colonial injustice, it is also produced in an extremely inhumane manner. To make matters worse, the industry consensus is that it tastes terrible.

Related video: https://www.youtube.com/watch?v=pkbuFwHnJQY

[–] [email protected] 1 points 1 week ago

I have figured out the problem. It turns out that although ip a shows the same MAC on the host, the MAC inside the VM is different.

As for being assigned the same IP, it is just my stupid router... Getting a new MAC (by creating a new VM) fixes the issue.
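
In case anyone else hits this, here is roughly how I compared the addresses (interface names are from my setup, and the example MAC is arbitrary):

```sh
# On the host: the bridge ends up showing the same MAC as the enslaved NIC.
ip -br link show dev enp1s0
ip -br link show dev br0

# Inside the VM: the virtio NIC reports its own MAC.
ip -br link show

# Transient way to give the bridge a distinct address (lost on reboot).
sudo ip link set dev br0 address 52:54:00:aa:bb:cc
```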

[–] [email protected] 1 points 1 week ago (1 children)

Sorry, a stupid question.

If the bridge is the primary interface on the host, and the Home Assistant KVM also uses this bridge, will this cause them to get the same IP again?

Thank you for your patience.

[–] [email protected] 1 points 1 week ago (3 children)

Thank you for the configuration. I wonder, since you have turned DHCP off for the host, will this prevent the host from getting an IP address?

[–] [email protected] 1 points 1 week ago (1 children)

> Two of the same MAC address can't exist in the same IP space, else the router can't route packets to them.

Yes, this seems to be my problem: both the host and the VM got the same IP, and I don't think I can send any traffic to either the host or the VM. So my router is probably confused, as you suggested.

> Is there an issue with using Docker for this?

I forgot to mention this: Docker does indeed work. However, HA requires a privileged Docker container running as root, which means HA essentially runs as root on the host.

This is fine on dedicated hardware, but since my server has other infrastructure on it, running HA as root can be a security risk.
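
For context, the container invocation I used is roughly the documented one below (paths are examples); the --privileged flag plus host networking is what gives it that broad access:

```sh
# Roughly the documented Home Assistant container invocation;
# --privileged plus host networking means it effectively runs as root on the host.
docker run -d \
  --name homeassistant \
  --privileged \
  --restart=unless-stopped \
  -e TZ=Etc/UTC \
  -v /path/to/ha-config:/config \
  --network=host \
  ghcr.io/home-assistant/home-assistant:stable
```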

[–] [email protected] 1 points 1 week ago (5 children)

Hi, thank you for your reply.

Did you make sure that eno2 is enslaved by br0? When br0 is created, it does have a unique MAC, but once it enslaves the hardware interface, it inherits the hardware address.

I have not tried to get the bridge going with virsh, but I was unsuccessful with the virt-manager UI, and I assume they use the same backend?

It is possible that I accidentally disabled some network virtualization kernel component during setup, as I have applied some mods from secureblue. I will reset everything to default and try again.

Do you have a tutorial for creating a bridge via virsh that you could share?
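
My understanding is that the virsh route boils down to pointing libvirt at an existing br0, roughly like the sketch below (the network name is just what I would pick); please correct me if that is not what you meant:

```sh
# Assuming br0 already exists on the host, define a libvirt network that
# simply reuses it, then attach the VM's NIC to "host-bridge".
cat > host-bridge.xml <<'EOF'
<network>
  <name>host-bridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
EOF

virsh net-define host-bridge.xml
virsh net-start host-bridge
virsh net-autostart host-bridge
```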

 

cross-posted from: https://mander.xyz/post/16531580

I have tried to follow several tutorials to set this up using either ip or nmtui:

However, the bridge inherits the MAC address of the host after enslaving the host hardware interface enp1s0. This causes my router to give both the host and the bridge the same IP address, making the HA instance inaccessible.

The Red Hat tutorial clearly shows that the bridge and the host have different IPs, so I was wondering if I am doing something wrong.
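
For reference, the nmtui steps I followed amount to roughly this nmcli sequence (connection names are just what I picked):

```sh
# Create the bridge, enslave the hardware NIC, and move the host's
# DHCP configuration onto the bridge.
nmcli connection add type bridge con-name br0 ifname br0
nmcli connection add type bridge-slave con-name br0-port ifname enp1s0 master br0
nmcli connection modify br0 ipv4.method auto
nmcli connection up br0
```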


Alternatively, I can set the Home Assistant VM to run behind NAT and port forward from the host, but I have several devices that communicate over different ports, so it would be annoying to forward all of them. Not to mention, many appliances don't document the ports they use.
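
To give a sense of the hassle: even forwarding just the web UI into libvirt's default NAT subnet would need something like the rule below (8123 is HA's web port, the VM address is an example), and then the same again for every device port:

```sh
# Forward one TCP port from the host into the VM on libvirt's default NAT network.
# In practice, masquerading / libvirt's own firewall zone may also need adjusting.
firewall-cmd --zone=public \
  --add-forward-port=port=8123:proto=tcp:toport=8123:toaddr=192.168.122.10
```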

I could also potentially use VirtualBox, but it is not well supported on Silverblue, especially with Secure Boot enabled.

 

[–] [email protected] 59 points 2 weeks ago (1 children)

From my read, this is not even for marketing, but mainly for gathering feedback to improve Framework products. Framework will also have merch packages for the ambassadors.

These ambassadors would attend Linux conferences anyway; Framework just wants them to pass along any feedback they hear to Framework.

I am okay with this.

[–] [email protected] 4 points 3 weeks ago (1 children)

Divest has all the GrapheneOS hardening and has unprivileged microG; it also runs on a much wider range of devices.

[–] [email protected] 2 points 3 weeks ago

I agree; all they need to ensure is that their own tool has the same access as everyone else's.

[–] [email protected] 3 points 3 weeks ago* (last edited 3 weeks ago)

According to Wikipedia, both Windows and Linux have it, and both are open source.

Believe it or not, a lot of companies, no matter how cool and secure their marketing sounds, are just seriously incompetent.

12
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]
 

Hi all Nix experts,

I recently started using nix to manage my dev environment on my immutable distro, and I need some help.

I was wondering, if I am using a large package like TexLiveFull, how do I make sure nix doesn't delete it after I close the shell? I also don't want this package to be available in my global environment, as I don't need it outside VS Code.

Another question is how to keep my packages up to date. I don't do serious development work, so I typically prefer my packages and dev tools to be on the latest version, and I would like as little management of this as possible. Ideally, every time I start up a nix shell, the package manager would grab the latest version of the package without requiring additional interaction from me. Is this possible?
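
What I have been trying looks roughly like this (flake-style CLI assumed, so the nix-command/flakes experimental features need to be enabled; the attribute name is taken from above and may be texlive.combined.scheme-full on older nixpkgs). I am not sure this is the intended workflow:

```sh
# Build the big package once and keep a result symlink in the project;
# the symlink acts as an indirect GC root, so nix-collect-garbage keeps it.
nix build nixpkgs#texliveFull --out-link ./.texlive-gc-root

# Use it in an ad-hoc shell without installing it into the global profile.
nix shell nixpkgs#texliveFull

# Channel-based way to stay current: update, then enter the shell.
nix-channel --update && nix-shell -p texliveFull
```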

Finally, is there any way to bubblewrap programs installed by nix so they can only access files within the starting path of the shell? I don't imagine this is possible, but it would definitely be nice if nix had a security feature like this.
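
Something like the sketch below is what I have in mind; pdflatex and the binds are just examples, and I expect more binds (/etc, caches, $HOME) would be needed in practice:

```sh
# Confine a nix-installed tool to the read-only store and the current directory.
bwrap \
  --ro-bind /nix /nix \
  --bind "$PWD" "$PWD" \
  --dev /dev \
  --proc /proc \
  --tmpfs /tmp \
  --unshare-all \
  --die-with-parent \
  --chdir "$PWD" \
  pdflatex main.tex
```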

Thanks in advance for your help! I understand parts of this post might be ridiculous. I am still new to nix. Please correct me if I am not using nix in the "correct" way.

0
submitted 7 months ago* (last edited 7 months ago) by [email protected] to c/[email protected]
29
submitted 7 months ago* (last edited 7 months ago) by [email protected] to c/[email protected]
 

Inspired by the video by Gem and Impulse from Hermitcraft. In the video, Impulse told Gem (a Canadian) that he had Hawaiian pizza with Canadian bacon on it, and Gem got really confused about what constitutes "Canadian bacon".

Apparently Canadian bacon is called "Canadian" because it was originally imported from Canada to New York, not because it is popular in or was invented in Canada.

On the other hand, Hawaiian pizza is a true cultural amalgamation. It was invented by a Greek immigrant in Canada, inspired by his experience cooking Chinese food. The one culture it has no connection to is Hawaii; its name comes from the brand of pineapple the inventor was using.

https://en.m.wikipedia.org/wiki/Back_bacon https://en.m.wikipedia.org/wiki/Hawaiian_pizza

 
25
submitted 10 months ago* (last edited 10 months ago) by [email protected] to c/[email protected]
 

I have set up my Fedora install to use LUKS-encrypted partitions. But entering two passwords gets quite tiring, as I shut down my laptop quite often to get the benefit of LUKS (I am assuming the data is not protected while the laptop is suspended; please correct me if I am wrong).

I am thinking about setting up TPM auto-decryption. However, I was wondering: does the decryption happen at boot or after I log in?

If it happens at boot, then the benefit seems pretty limited compared to an unencrypted drive, since an attacker could simply boot my laptop and get at the decrypted data.

Am I missing something here? Is there a way for me to enter my password once and unlock everything, from the disk to the GNOME keyring?
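
For reference, the TPM route I am considering is the systemd-cryptenroll one below; as I understand it, the unlock then happens in the initrd at boot, before login (device path and PCR choice are examples):

```sh
# Enroll the TPM as an extra key slot for the LUKS partition;
# binding to PCR 7 ties the key to the Secure Boot state.
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3

# Then add tpm2-device=auto to the volume's options in /etc/crypttab
# and rebuild the initramfs so the initrd tries the TPM at boot.
sudo dracut -f
```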

 

Just a curiosity. Theoretically, FRP (Factory Reset Protection) could use the current login password as a way of authenticating after a reset, but everything on the web states that you need a Google account to take advantage of the feature.

 

Video by The Verge.
