[-] [email protected] 1 points 6 days ago

Oh, I was only aware of loans where the lender sets the payments to spread the total exactly over the period; those are the only ones I've seen and taken, so each month I get charged the amount needed to keep up with the loan.
For the rest it makes sense how they make money, since I've had credit cards which don't show, or at the very least hide, the amount needed to avoid paying interest, and only tell you the minimum payment.

57
submitted 1 week ago by [email protected] to c/[email protected]

I mean, the price of the product is the same, and I'm taking a loan for the duration of the credit but paying no interest?
What's the catch?
I can keep my money earning a bit of interest instead of handing it over right away (e.g. $1,000 sitting in a 4% savings account earns about $40 over a year), and without increasing the price of what I was already planning to buy. When or why wouldn't I choose 0% credit?

20
submitted 1 month ago by [email protected] to c/[email protected]

I'm looking at my library and I'm wondering if I should process some of it to reduce the size of some files.

There are some movies in 720p that are 1.6~1.9GB each. And then there are some at the same resolution that are 2.5GB.
I even have some in 1080p which are just 2GB.
I only have two movies in 4k: one is 3.4GB and the other is 36.2GB (I can't really tell the difference in detail since I don't have a 4k display).

And then there's an anime I have twice at the same resolution: one set of files is around 669~671MB each, the other set 191MB each (although in this case the quality difference is noticeable while playing them, as opposed to the other files, where I compared extracted frames).

What would you do? What's your target size for movies and series? What bitrate do you go for in which codec?

Not sure if it's kind of blasphemy around here to talk about trading quality for size, hehe, but I don't know where else to ask this. I was planning on using these settings in ffmpeg, what do you think?
I tried it on an anime episode at 1080p, going from 670MB to 570MB, and I wasn't able to tell the difference in quality when extracting a frame from the input and the output.
ffmpeg -y -threads 4 -init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda -i './01.mp4' -c:v h264_nvenc -preset:v p7 -profile:v main -level:v 4.0 -vf "hwupload_cuda,scale_cuda=format=yuv420p" -rc:v vbr -cq:v 26 -rc-lookahead:v 32 -b:v 0 -c:a copy './01_out.mp4'
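
To compare quality I extract the same frame from the input and the output and flip between them, something like this (the timestamp and filenames are just examples):

ffmpeg -ss 00:05:00 -i './01.mp4' -frames:v 1 frame_in.png
ffmpeg -ss 00:05:00 -i './01_out.mp4' -frames:v 1 frame_out.png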

40
submitted 1 month ago by [email protected] to c/[email protected]

I need to help audit a project from another team.
I got pointers on what's expected to be checked, but I don't have any templates for the documents expected in an audit report, which also means I'm not sure about the usual process for conducting an internal audit.
I mean, I might as well read the whole repo, but maybe that's too much?

Any help or pointers on what I need to investigate to get started would be great!

17
submitted 1 month ago by [email protected] to c/[email protected]

cross-posted from: https://lemmy.pe1uca.dev/post/1136490

I'm checking this mini pc https://www.acemagic.com/products/acemagic-ad08-intel-core-i9-11900h-mini-pc

It says the M.2 and SATA ports are limited to 2TB, but I can't imagine why that's the case.
Could there be a limit on the motherboard? On the CPU?
If, as seems most likely, this is enforced in software (Windows), it probably won't matter, since I'm planning to switch to Linux.

What I want to avoid is buying it and being unable to use an 8TB drive.

37
submitted 1 month ago by [email protected] to c/[email protected]

I started tinkering with frigate and saw the option to use a coral ai device to process the video feeds for object recognition.

So, I started checking a bit more what else could be done with the device, and everything listed on the site is related to human recognition (poses, faces, body parts) or voice recognition.

Somewhere I read that stable diffusion or LLMs are not an option, since they require a lot of RAM, which these kinds of devices lack.

What other good/interesting uses can these devices have? What are some of your deployed services using these devices for?

[-] [email protected] 57 points 1 month ago

It's just a matter of time until all your messages on Discord, Twitter etc. are scraped, fed into a model and sold back to you

As if that hasn't happened already.

19
submitted 2 months ago by [email protected] to c/[email protected]

I have a few servers running some services using a custom domain I bought some time ago.
Each server has its own instance of caddy to handle a reverse proxy.
Only one of those servers can actually perform the DNS challenge to generate the certificates, so I was manually copying the certificates to each of the other caddy instances that needed them and using the tls directive for that domain to read the files.

I just found there are two ways to automate this: shared storage, and on-demand certificates.
So here's what I did to make it work with each one; hope someone finds it useful.

Shared storage

This one is in theory straightforward: you just mount a folder which all caddy instances will use.
I went the sshfs route, so I created a user and added ACLs to allow both the local caddy user and the new remote user to write to the storage.

# default ACLs so newly created files inherit the permissions
setfacl -Rdm u:caddy:rwX,o::--- ./
setfacl -Rdm u:remote_user:rwX,o::--- ./
# access ACLs for the files and directories that already exist
setfacl -Rm u:caddy:rwX,o::--- ./
setfacl -Rm u:remote_user:rwX,o::--- ./

Then on the server which will use the data I mounted it via /etc/fstab:

remote_user@<main_caddy_host>:/path/to/caddy/storage /path/to/local/storage fuse.sshfs noauto,x-systemd.automount,_netdev,reconnect,identityfile=/home/remote_user/.ssh/id_ed25519,allow_other,default_permissions,uid=caddy,gid=caddy 0 0
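
A quick sanity check that the automount works and the caddy user can write through it (paths as above):

sudo systemctl daemon-reload
sudo mount /path/to/local/storage
sudo -u caddy touch /path/to/local/storage/write_test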

And pointed caddy at the mount as its storage, in the Caddyfile global options:

{
	storage file_system /path/to/local/storage
}

On demand

This one requires a separate service, since caddy can't properly serve the files needed by the get_certificate directive.

We could run a service on the main caddy instance which reads the key and crt files and combines them, but I chose to serve the raw files from the main instance and combine them on the server which needs them.

So, in my main caddy instance I have the following. Access is restricted to my tailscale IP, and it includes the /ask endpoint required by the on-demand configuration.

@certificate host cert.localhost
handle @certificate {
	@blocked not remote_ip <requester_ip>
	respond @blocked "Denied" 403

	@ask {
		path /ask*
		query domain=my.domain domain=jellyfin.my.domain
	}
	respond @ask "" 200

	@askDenied `path('/ask*')`
	respond @askDenied "" 404

	root * /path/to/certs
	@crt {
		path /cert.crt
	}
	handle @crt {
		rewrite * /wildcard_.my.domain.crt
		file_server
	}

	@key {
		path /cert.key
	}
	handle @key {
		rewrite * /wildcard_.my.domain.key
		file_server
	}
}
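
To test it from the allowed IP (assuming cert.localhost resolves to the main instance from there; -k because cert.localhost won't have a publicly trusted certificate):

curl -k 'https://cert.localhost/ask?domain=jellyfin.my.domain'   # expect 200
curl -k 'https://cert.localhost/cert.crt'                        # expect the wildcard certificate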

Then, on the server which will use the certs, I run a small service for caddy to make the HTTP requests against.
This also includes another way to handle the /ask endpoint: wildcard certificates are not requested with a literal *, caddy actually asks for each subdomain individually, so the query matcher above can't approve something like domain=*.my.domain, while this service approves any subdomain by its suffix.

package main

import (
	"io"
	"net/http"
	"strings"

	"github.com/labstack/echo/v4"
)

func main() {
	e := echo.New()

	// Approve any subdomain of my.domain, since caddy asks for each
	// subdomain individually even when using a wildcard certificate.
	e.GET("/ask", func(c echo.Context) error {
		if domain := c.QueryParam("domain"); strings.HasSuffix(domain, "my.domain") {
			return c.String(http.StatusOK, domain)
		}
		return c.String(http.StatusNotFound, "")
	})

	// Fetch the certificate and key from the main caddy instance and
	// concatenate them into the single PEM that get_certificate expects.
	e.GET("/cert.pem", func(c echo.Context) error {
		crtResponse, err := http.Get("https://cert.localhost/cert.crt")
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}
		defer crtResponse.Body.Close()
		crtBody, err := io.ReadAll(crtResponse.Body)
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}

		keyResponse, err := http.Get("https://cert.localhost/cert.key")
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}
		defer keyResponse.Body.Close()
		keyBody, err := io.ReadAll(keyResponse.Body)
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}

		return c.String(http.StatusOK, string(crtBody)+string(keyBody))
	})

	e.Logger.Fatal(e.Start(":1323"))
}

And in the Caddyfile, request the certificate from this service:

{
	on_demand_tls {
		ask http://localhost:1323/ask
	}
}

*.my.domain {
	tls {
		get_certificate http http://localhost:1323/cert.pem
	}
}
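
The whole chain can then be verified from the requesting server:

curl 'http://localhost:1323/ask?domain=anything.my.domain'   # 200 for any subdomain of my.domain
curl 'http://localhost:1323/cert.pem'                        # the combined certificate + key
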
[-] [email protected] 35 points 3 months ago

Well, I'm just starting with serious backups; AFAIK you only need to back up the data which you can't replicate.

Low-seeded torrents are just hard to get again, but not impossible. Personal photos, your notes, and any other files generated by you are the ones which need backups.

10
submitted 3 months ago by [email protected] to c/[email protected]

It seems the SSD sometimes heats up and its content disappears from the device, mostly on my router, sometimes on my laptop.
Do you know what I should configure to put the drive to sleep, or something similar, to reduce the heat?

I'm starting up my datahoarder journey now that I replaced my internal nvme SSD.

It's just a 500GB one which I attached to my d-link router running openwrt. I configured it with samba and everything worked fine when I finished the setup. I just have some media files in there, so I read the data from jellyfin.

After a few days the content disappears. It's not a connection problem with the shared drive, since when I ssh into the router the files aren't shown either.
I need to physically remove the drive and connect it again.
When I do this I notice it's somewhat hot. Not scalding, just hot.

I also tried connecting it directly to my laptop running ubuntu. There the drive sometimes remains cool and the data shows up without issue even after days.
But sometimes it also heats up and the data disappears (even when the data was not being used, i.e. I hadn't configured jellyfin to read from the drive).

I'm not sure how to make the ssd sleep for periods of time, or how to throttle it so it can cool off.
Any suggestions?

6
submitted 3 months ago by [email protected] to c/[email protected]

I started fiddling with my alias service and wondered what approach other people take.
Not necessarily the best option, but what do you prefer? What pros and cons do you see with each option?

Currently I'm using anonaddy and proton, so I have a few options to create aliases.

  • The limited shared-domain aliases (from my current subscription level)
    Probably the only option to avoid being tracked, if it were unlimited; I'd just have to pay more for the service.
  • Unlimited aliases with a subdomain of the shared domain
    For example: baked6863.addy.io
  • Unlimited aliases with a custom domain.
  • Unlimited aliases with a subdomain of a custom domain.
    This is different from the one above since the domain could be used for different things, not dedicated to email.
  • Catch-all with addy.
    The downside I've read is that people could spam any random word, and if you then disable that alias, anyone who had been using it legitimately wouldn't be able to reach you anymore.
  • Catch-all with proton.
    Proton has a limit on how many email addresses you can actually have, so when you receive an email to an alias and want to reply, you'll be doing it from the catch-all address instead of the alias.

What do you think?
What option would you choose?

2
submitted 4 months ago by [email protected] to c/[email protected]

I started delving into world and dungeon generation with different techniques.
The one I want to try is wave function collapse.

There are several videos and repos explaining and showcasing how it works and how it can be used to generate an infinite world.

One question I have, and haven't seen mentioned anywhere, is: how do I recreate/reload the map from any point other than the original starting one?

So, AFAIK the algorithm starts from a few tiles/pixels at a starting position (or picks their position at random) and then collapses the rest of the map with the set of rules given to the building blocks. But if those starting tiles/pixels are far away by the time a player saves, I can only think of starting from them again and generating until I reach the saved point to show the same world, which of course could mean a very long loading screen.

Maybe the save could include the current seed, but then the generation can advance differently when the player goes back, which means the algorithm would generate a different portion of the map.
How can I ensure the world is regenerated exactly as it was?

While writing this I'm thinking I could generate the seed of a block of tiles/pixels from the seed of the neighboring blocks and its coordinates in the map, something like left: seed+X, right: seed-Y, where X and Y are calculated from the block's coordinates.
This way I can save the seed of the current block and easily recalculate the seeds used to generate all the adjacent blocks.
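
A minimal sketch of the idea in bash (hypothetical values, and using a hash of the world seed plus the block coordinates instead of the +X/-Y offsets, so a block's seed doesn't depend on the path taken to reach it):

# derive a reproducible per-block seed from the world seed and the block
# coordinates, so any saved region can be regenerated without replaying
# generation from the original starting tiles
world_seed=12345
block_x=-3
block_y=7
block_seed=$(printf '%s:%s:%s' "$world_seed" "$block_x" "$block_y" | sha256sum | cut -c1-16)
echo "seed for block ($block_x,$block_y): $block_seed"
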
What do you think about this approach?

11
submitted 5 months ago by [email protected] to c/[email protected]

I have an old android tablet (and several phones) that I want to use for small applications in my home automation.
For the most part just to show a web page where I can quickly click something to activate it or read its status.

My issue is the OS installed is very old and of course there are no official updates.
Looking at custom roms, they are also somewhat old because of the age of the devices, and everyone says "don't use the rom of one device on another, even if the models are very similar".

So, my question is: what are my options if I can't use a pre-built rom?
Could I keep the same OS and just restrict the devices' access to only my internal network?
Not sure if I'm being too paranoid about the security risks of using these devices just to connect to my services.

[-] [email protected] 36 points 7 months ago

I recently switched to ubuntu on a gaming laptop. Right now I've been using it just for jellyfin and some other coding tasks, but it definitely runs smoother, more stable, quicker, and cooler than windows did for the same workload.
I was surprised at the difference even with the machine idle: on windows it was noticeably warm, on ubuntu it's almost as if it had been turned off.

8
submitted 7 months ago by [email protected] to c/[email protected]

What's your recommendation for a selfhosted service to stream some private videos from an S3-compatible service (vultr)?

I was thinking a private peertube instance could work, but it requires the S3 files to be public and to allow all origins, so I don't like that idea.

The other option was to use rclone mount to treat it as just another block of storage, but I don't know the cons of this, or whether it's even possible with this kind of service.
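
For reference, the kind of thing I mean (hypothetical remote name, configured against Vultr's S3-compatible endpoint):

rclone mount vultr:my-videos /mnt/videos --read-only --vfs-cache-mode full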

This won't be for my camera videos (already have immich) nor for series/movies (jellyfin). It'll be for random videos from youtube or twitch which I want to hoard.

(Also, if you have a recommendation for cheap online storage for this, it'd be appreciated; Vultr's is $0.006/GB.)

[-] [email protected] 45 points 7 months ago

It's funny they think 5 seconds of no content is worse than 10~30 seconds of ads.

[-] [email protected] 49 points 7 months ago

Windows: you're going to use wsl, right?

[-] [email protected] 39 points 7 months ago* (last edited 7 months ago)

Well, not just this data: all activity on lemmy is public, since it needs to be federated (every instance subscribed to the community receives all activity).
Which means anyone can track anyone else, if they subscribe to the same communities the user's instance has.

AFAIK the only activity not sent is saved content, and downvotes for content hosted on instances which have disabled them.

EDIT: for an example, here's my upvote to this post

"actor":"https://lemmy.pe1uca.dev/u/pe1uca","object":"https://sh.itjust.works/post/8931097","type":"Like","id":"https://lemmy.pe1uca.dev/activities/like/f6b0cced-4e1c-41d7-bf11-349b680c4d84","audience":"https://lemmy.one/c/privacyguides"  

And here's the original comment

actor":"https://lemmy.pe1uca.dev/u/pe1uca","to":["https://www.w3.org/ns/activitystreams#Public"],"object":{"type":"Note","id":"https://lemmy.pe1uca.dev/comment/1434121","attributedTo":"https://lemmy.pe1uca.dev/u/pe1uca","to":["https://www.w3.org/ns/activitystreams#Public"],"cc":["https://lemmy.one/c/privacyguides","https://sh.itjust.works/u/andrew_bidlaw"],"content":"","mediaType":"text/markdown"},"published":"2023-11-11T04:07:31.962497+00:00","tag":[{"href":"https://sh.itjust.works/u/andrew_bidlaw","name":"@[email protected]","type":"Mention"}],"distinguished":false,"language":{"identifier":"en","name":"English"},"audience":"https://lemmy.one/c/privacyguides"},"cc":["https://lemmy.one/c/privacyguides","https://sh.itjust.works/u/andrew_bidlaw"],"tag":[{"href":"https://sh.itjust.works/u/andrew_bidlaw","name":"@[email protected]","type":"Mention"}],"type":"Create","id":"https://lemmy.pe1uca.dev/activities/create/7a1c726e-0191-4a71-8980-a565727ac52d","audience":"https://lemmy.one/c/privacyguides"   

And all instances which are subscribed to this community need to receive this information to keep it updated.

[-] [email protected] 122 points 9 months ago

I never understood this: it's your selfhosted server, but you kind of don't own it and depend on them. You just have an application which depends on their service, which means plex isn't 100% selfhostable, correct?

[-] [email protected] 25 points 9 months ago

if a player deletes and reinstalls a game, that counts for two installs and two charges. Ditto for players installing a single game on two devices.

Wow, I'm speechless. I can only imagine the deals that will have to be ended for games in places like Netflix or the Google play pass.

[-] [email protected] 54 points 11 months ago

I just started using rss for the communities I still want to follow.
You only need to take the community's reddit URL and append .rss to it in your reader.
For example https://www.reddit.com/r/technology/hot.rss

[-] [email protected] 26 points 1 year ago

I didn't even know about core-js until the dev complained about all the sites which use it. https://github.com/zloirock/core-js/blob/master/docs/2023-02-14-so-whats-next.md

[-] [email protected] 34 points 1 year ago

I bet you were really happy seeing that sub at that moment.


pe1uca

joined 1 year ago