17
submitted 1 month ago by [email protected] to c/[email protected]

cross-posted from: https://lemmy.ml/post/16693054

Is there a feature in a CI/CD pipeline that creates a snapshot or backup of a service's data prior to running a deployment? The ideal workflow I am searching for looks something like the following (a rough sketch of the snapshot step follows the list):

  1. CI tool identifies new version of service and creates a pull request
  2. Manually merge pull request
  3. CD tool identifies changes to Git repo
    1. CD tool creates data snapshot and/or data backup
    2. CD tool deploys update
  4. Issue with deployment identified that requires rollback
    1. Git repo reverted to prior commit and/or Git repo manually modified to prior version of service
    2. CD tool identifies the rolled back version
      1. (OPTIONAL) CD tool creates data snapshot and/or data backup
      2. CD tool reverts to snapshot taken prior to upgrade
      3. CD tool deploys service to prior version per the Git repo
  5. (OPTIONAL) CD tool prunes data snapshot and/or data backup based on provided parameters (eg - delete snapshots after _ days, only keep 3 most recently deployed snapshots, only keep snapshots for major version releases, only keep one snapshot for each latest major, minor, and patch version, etc.)
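For illustration, the snapshot step (3.1) could be as simple as a pre-deploy hook that archives the service's data before the new version rolls out. Below is a minimal sketch, assuming Docker named volumes and the `docker` CLI; the volume and directory names are hypothetical placeholders:

```python
#!/usr/bin/env python3
"""Pre-deploy hook sketch: archive a service's Docker volume before deploying."""
import subprocess
from datetime import datetime, timezone

VOLUME = "myservice_data"        # hypothetical volume holding the service's data
SNAPSHOT_DIR = "/srv/snapshots"  # hypothetical destination for the backups


def snapshot_volume(volume: str, dest_dir: str) -> str:
    """Write the volume's contents to a timestamped tarball via a throwaway container."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = f"{volume}-{stamp}.tar.gz"
    subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", f"{volume}:/data:ro",   # mount the service data read-only
            "-v", f"{dest_dir}:/backup",  # mount the snapshot destination
            "alpine",
            "tar", "czf", f"/backup/{archive}", "-C", "/data", ".",
        ],
        check=True,
    )
    return archive


if __name__ == "__main__":
    print(f"snapshot written: {snapshot_volume(VOLUME, SNAPSHOT_DIR)}")
    # ...the CD tool deploys the new version here; on rollback it would
    # restore the tarball and redeploy the prior version from Git.
```

Ideally, though, the CD tool would handle this itself, tie each snapshot to the Git commit being deployed, and restore the matching snapshot on rollback.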
[-] [email protected] 21 points 2 months ago

Congrats on getting everything working - it looks great!

One piece of (unprovoked, potentially unwanted) advice is to set up SSL. I know you're running your services behind WireGuard, so there isn't too much of a security concern in running your services over HTTP. However, as the number of your services or users (family, friends, etc.) increases, you're more likely to run into issues with services not running on HTTPS.

The creation and renewal of SSL certificates can be done for free (assuming you already have a domain name) and automatically with certain reverse proxies like Nginx Proxy Manager or Traefik, both of which can be run in Docker. If you set everything up with a wildcard certificate via a DNS challenge, you can also keep the services you run hidden from people scanning the DNS records on your domain (ie people won't know that an SSL certificate was issued for immich.your.domain). How you set up the DNS challenge will vary by DNS provider and reverse proxy, but regardless of which services you use, the only additional thing you will likely need for a wildcard certificate is an email address (again, assuming you already have a domain name).
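If it helps to see the moving parts, the same DNS-challenge flow can be exercised manually with certbot. A minimal sketch, assuming the certbot-dns-cloudflare plugin is installed; the domain, email address, and credentials path are placeholders, and reverse proxies like Traefik or Nginx Proxy Manager automate this internally through their own configuration:

```python
"""Sketch: issue a wildcard certificate via a DNS-01 challenge using certbot.

Assumes the certbot-dns-cloudflare plugin; all names/paths are placeholders.
"""
import subprocess

subprocess.run(
    [
        "certbot", "certonly",
        "--dns-cloudflare",  # complete the challenge via Cloudflare DNS records
        "--dns-cloudflare-credentials", "/root/.secrets/cloudflare.ini",
        "-d", "*.your.domain",            # wildcard cert keeps subdomains private
        "--email", "[email protected]",    # the only extra info typically needed
        "--agree-tos", "--non-interactive",
    ],
    check=True,
)
# Renewal is `certbot renew` on a schedule; a reverse proxy with a built-in
# ACME client handles both issuance and renewal for you.
```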

[-] [email protected] 8 points 3 months ago

Didn't look at the repo thoroughly, but I can appreciate the work that went into this.

  • Is there any reason you went this route instead of just using a user-overrides.js file with the standard arkenfox user.js file?
  • Does the automatic dark theme require enabling any fingerprintable settings (beyond possibly determining the theme of the OS/browser)?
  • How are you handling exceptions for sites? I assumed it would be in the user.js file, but I didn't notice anything in particular handling specific URLs differently.
[-] [email protected] 23 points 6 months ago

Calls made from speakers and Smart Displays will not show up with a caller ID unless you’re using Duo.

Is it still possible to use Duo? Google knows it discontinued Duo and merged it into Google Meet nearly 18 months ago, right?

[-] [email protected] 7 points 6 months ago

I don't understand what point you are trying to make. Mozilla has several privacy policies that cover its various products and services which all seem to follow Mozilla's Privacy Principles and Mozilla's overarching Privacy Policy. Mozilla also has documentation regarding data collection.

The analytics trackers that you mentioned would fall under Mozilla's Websites Privacy Policy, which does state that it uses Google Analytics; this can easily be verified a number of ways, such as with the services you previously listed.

However, Firefox Sync uses https://accounts.firefox.com/, which has its own Privacy Policy. There is some confusion around "Firefox Accounts" as it was rebranded to "Mozilla Accounts", which again has its own Privacy Policy. There is no indication that data covered by those policies is shared with Google. If the Google Analytics trackers on Mozilla's websites are still a concern for these services, you can verify that the Firefox Accounts and Mozilla Accounts URLs do not contain any Google Analytics trackers.

Firefox has a Privacy Policy as well, with sections for both Mozilla Accounts and Sync; neither indicates that data is shared with Google. Additionally, the data stored via the Sync service is encrypted. However, Mozilla does collect some telemetry data regarding Sync, and more information about it can be found in Mozilla's documentation about telemetry for Sync.

The only thing that I could find about Firefox, Sync, or Firefox Accounts/Mozilla Accounts sharing data with Google was for location services within Firefox. While it would be nice for Firefox not to use Google's geolocation services, it is a reasonable concession and can be disabled.

Mozilla is most definitely not a perfect company, even when it comes to privacy. Even Firefox has been caught with some privacy issues relatively recently with the unique installation ID.

Again, I'm not saying that Mozilla is doing nothing wrong. I am saying that your "evidence" that Mozilla is sharing Firefox, Sync, or Firefox Accounts/Mozilla Accounts data with Google because of Google Analytics trackers on some of Mozilla's websites is coincidental at best. Without additional evidence, it is misleading or flat-out wrong.

[-] [email protected] 15 points 6 months ago

https://changedetection.io/

Change Detection can be used for several use cases. One of them is monitoring price changes.

[-] [email protected] 77 points 8 months ago

tl;dr: Multiple browsers and browser components must each hold a notable market share in order to properly ensure and maintain truly open web standards.

It is important that Firefox and its components, like Gecko and SpiderMonkey, exist and maintain a notable market share. Likewise, it is important for WebKit and its components to exist and maintain a notable market share. The same is true for any other browser, rendering engine, or JavaScript engine.

While it is great that we have so many alternatives to Google Chrome, like Chromium, Edge, Vivaldi, etc., they all use the same or very similar engines. This means that they all display and interact with websites nearly identically.

When Google decides that a certain implementation or interpretation of web standards, formats, behavior, etc. should be included in Google Chrome (and consequently all Chromium-based browsers), then the majority of the browser market will behave that way. If Chrome and the Chromium-based browsers reach a nearly unanimous market share, then Google can ignore any or all open web standards, force its will in deciding and implementing new open web standards, or even become the de facto open web standard.

When any one entity has that much control over the open web standards, the standards are no longer truly "open"; in this case they become "Google's web standards". In some (or maybe even many) cases, this may be fine. However, we saw with Internet Explorer in the past that this is not something the market should allow. We are seeing evidence that we shouldn't allow Google to have this much influence with things like the adoption of JPEG XL or the implementation of FLoC.

With three or more browser engines, rendering engines, and browsers holding notable market shares, web developers are forced to develop in adherence to the accepted open web standards. With enough market share spread across those engines and browsers, each is incentivized to maintain compatibility with open web standards. As long as the open web standards are designed and maintained without overt influence from a single entity (or a small group of them) and the standards are actively used, the interests of all internet users are served.

Otherwise, only the interests of a few entities (in this case, Google) are served.

[-] [email protected] 6 points 10 months ago

Your options will depend on how much effort you are willing to put in and what other services you have access to (or are willing to run).

For example, do you have a Network Video Recorder (NVR) or something like Home Assistant that can consume a Real-Time Messaging Protocol (RTMP) or Real Time Streaming Protocol (RTSP) video feed? Can you modify your network to block all internet traffic to/from the doorbell? Are you comfortable using a closed source, proprietary app to setup the doorbell? Is creating your own doorbell feasible?
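For context, consuming an RTSP feed is straightforward once something on your network can reach it. A quick sketch with OpenCV, where the URL and credentials are hypothetical placeholders:

```python
# Verify that a doorbell's RTSP feed is readable.
import cv2

RTSP_URL = "rtsp://user:[email protected]:554/stream1"  # placeholder

cap = cv2.VideoCapture(RTSP_URL)
if not cap.isOpened():
    raise SystemExit("could not open RTSP stream")

ok, frame = cap.read()  # grab a single frame to confirm the feed works
if ok:
    print(f"got frame: {frame.shape[1]}x{frame.shape[0]}")
cap.release()
```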

I'm not aware of any doorbell you can buy that meets all of your requirements without at least one of the items mentioned above. In fact, I believe the only way to meet all of your requirements is to build your own doorbell. However, some brands that come close are Reolink and Amcrest.

[-] [email protected] 6 points 10 months ago

This integration won't allow you to do that. Python will not run locally; it will run on Microsoft's platform (likely Azure) instead.

If you're just reading some simple data from Excel, there are several ways of accomplishing this already. For example, Pandas has read_excel() and there is also openpyxl. You could even use those tools to write the results back to Excel. Things get more complicated, though, if the Excel file is something more than just a simple list.
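For example, reading a simple table, aggregating it, and writing the result back might look like this (a quick sketch; the file, sheet, and column names are made up):

```python
import pandas as pd

# read a simple table from an existing workbook
df = pd.read_excel("sales.xlsx", sheet_name="Sheet1")

# e.g., total sales per region
summary = df.groupby("region", as_index=False)["amount"].sum()

# append the results as a new sheet in the same workbook (uses openpyxl)
with pd.ExcelWriter(
    "sales.xlsx", engine="openpyxl", mode="a", if_sheet_exists="replace"
) as writer:
    summary.to_excel(writer, sheet_name="summary", index=False)
```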

[-] [email protected] 6 points 10 months ago

Will this actually automate your workflow?

It seems that this Python integration expects that the source data already exists within the Excel file and Python can essentially just be used to create either visuals or new tables within the same Excel file.

If that's accurate, then this is intended exclusively for data analysis and not process automation. I don't think people will be able to enhance their existing Python-based ETL jobs or create new ones with this integration. This does not seem to be a replacement or substitute for VBA or Office Scripts. It also does not seem to be an alternative to Power Query. If anything, this seems most similar to Power Pivot.

[-] [email protected] 7 points 11 months ago

There are a few recommendations for the PineTime in this thread. It is a great privacy-focused smartwatch, but I don't think you would be happy with it based on your requirements. It is not a device that allows you to go for a run and keep your phone at home.

The storage on the device is extremely limited, which prevents you from playing any audio (eg songs, podcasts, etc) directly. The device does not have any wireless connectivity outside of Bluetooth, so it cannot stream any audio either. It does not have any speakers, and I'm not certain you can even connect it to wireless headphones.

The watch has some apps, but none that are well suited for fitness. It counts steps well, but it does not directly calculate distance, pace, etc. It also measures heart rate, but currently the watch screen must be on for it to record. I think the longest the screen will stay on without any interaction is 30 minutes, which may be too short for long runs or bike rides. Additionally, I'm not aware of any GPS/location-tracking functionality.

Lastly, since the apps are limited and there is no advanced wireless functionality, you can't use it for things that you may be used to for on-the-go activities. For example, you won't be able to use it to pay for a drink halfway through a run or call someone if you hurt your ankle a few miles from your destination.

With all that said, I still highly recommend the PineTime as a privacy-focused, FLOSS, smartphone-companion smartwatch. I don't think you'll find these features in any other device, particularly at this price point. However, you will be extremely disappointed with it if you're getting it so you can take it on runs while leaving your phone at home.

2
submitted 11 months ago* (last edited 10 months ago) by [email protected] to c/[email protected]

I'm trying to find a video that demonstrated automated container image updates for Kubernetes, similar to Watchtower for Docker. I believe the video was by @[email protected] but I can't seem to find it. The closest functionality that I can find to what I recall from the video is k8s-digester. Some key features that were discussed include:

  • Automatically update tagged version number (eg - Image:v1.1.0 -> Image:v1.2.0)
  • Automatically update image based on tagged image's digest for tags like "latest" or "stable"
  • Track container updates through modified configuration files
    • Ability to manage deploying updates through Git workflows to prevent unwanted updates
  • Minimal (if any) downtime
  • This may not have been in the video, but I believe it also discussed managing backups and rollback functionality as part of the upgrade process

While this tool may be used in a CI/CD pipeline, it's not limited exclusively to Git repositories, as it could be used to monitor container registries from various people or organizations. The tool/process may have also incorporated Ansible.

If you don't know which video I'm referring to, do you have any suggestions on how to achieve this functionality?

EDIT: For anyone stumbling on this thread, the video was Meet Renovate - Your Update Automation Bot for Kubernetes and More! by @[email protected], which discusses the Kubernetes tool Renovate.

[-] [email protected] 12 points 11 months ago

I'm not aware of any great FOSS/FLOSS Tasker alternatives. There are a few options, but they will be less capable, functional, extensible, user-friendly, or modern.

More direct alternatives

Requires a server to run automations/scripts

Requires scripts and may require a server and/or additional add-on apps

[-] [email protected] 5 points 1 year ago

I know the project may still be in its infancy, but are there any current/prospective screenshots or design files for this initiative? Even better - is a live demo available?

4
submitted 2 years ago by [email protected] to c/[email protected]

I've been looking for something "official" from the LibreWolf team regarding running LibreWolf in Docker, but I haven't found much. There are a few initiatives that seem to support LibreWolf Docker containers (eg GitHub, Docker Hub), but they don't seem to be referenced much or heavily used. However, maybe the reason I don't see it much is that there are better ways to achieve what I'm looking for:

  • Better separation between my daily OS environment and my regular browsing environment
  • Ability to run multiple instances of a privacy-friendly browser and isolate each instance for particular use cases
  • Ability to configure each instance to run over a different VPN (or no VPN at all)

What is the best way to achieve this?
