ShadowAether

joined 2 years ago
[–] ShadowAether 5 points 2 years ago

Boost has always had ads; it's a one-time purchase to remove them.

[–] ShadowAether 2 points 2 years ago

Same, I didn't bother upgrading to the G8 because of that. I wouldn't get a Pixel bc they removed the headphone jack. I was actually leaning towards the Sony Xperia line, but it's not sold in my country. The CAT smartphone meets the specs I want, but it's expensive. I just replaced the battery in my G7, so I hope to get at least another year out of it, and hopefully something better gets released.

[–] ShadowAether 2 points 2 years ago

Did Apple shut down sideloading then? When I was testing on iOS, local dev installation was a pain, but I would assume that's still an option if you enable the dev features

[–] ShadowAether 2 points 2 years ago (1 children)

I don't think they are any more "locked down" than the other vendors; usually they have a bunch more features on top of base Android. I never used their store tho, so idk if that's what you're referring to

[–] ShadowAether 4 points 2 years ago (2 children)

LG G7 ThinQ. It had all the features I wanted and was a good price. I actually did look around a month ago and nothing else on the market compares even years after it came out (I could get close to what I wanted, but I would need to go high end), which is a shame bc I'll have to switch for 5G at some point. Google and Samsung removed the 3.5mm jack, so I'll never buy from them.

[–] ShadowAether 4 points 2 years ago

Um well the food communities seem to be active and drama-free. Plus the gardening communities. Neither of those topics is on your list. There are also a bunch of animal communities if you really want cat pics lol

[–] ShadowAether 3 points 2 years ago

Sometimes it's just like monkeys on typewriters

[–] ShadowAether 1 points 2 years ago (1 children)

Ya, I forgot I turned off automatic app updates after something similar with another app

[–] ShadowAether 2 points 2 years ago* (last edited 2 years ago) (3 children)

Idk but I logged in on an older version and it didn't log me out so maybe downgrade it then upgrade it? (Edit: seems like I'm still on 0.33)

[–] ShadowAether 3 points 2 years ago

I believe in you, let us know how it works out

[–] ShadowAether 3 points 2 years ago* (last edited 2 years ago)

That's true, but even with 3.5, I find it takes multiple prompts to get it to give me the info I want. Whereas if I put it in Google, I'll find the Reddit or Stack Overflow post that's what I want, or close to it. Like just yesterday I asked how to do something without using sudo and the first command it gives me uses sudo...

Edit: Even using 4 through bing, I ask it for books or papers and 2/3 results are blog posts https://imgur.com/a/eI0bJL5

4
submitted 2 years ago* (last edited 2 years ago) by ShadowAether to c/learnmachinelearning
 

Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question:

I have a dataset that contains vectors of shape 1xN, where N is the number of features. Each value is a float between -4 and 5. For my project I need to make an autoencoder; however, activation functions like ReLU or tanh will either only allow positive values through the layers or constrain them to within -1 and 1. My concern is that upon decoding from the latent space the data will not be represented in the same way: I will either get vectors with positive values only or constrained negative values, while I want the output to be close to the original.

Should I apply some kind of transformation, like adding a positive constant, applying exp(), or raising the data to the power of 2, train the VAE, and then apply log() or log2() to the output if I want the original representation back? Or am I missing some configuration of activation functions that can give me an output similar to the original input?
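
One common way to handle this (a sketch, not from the original thread, assuming PyTorch and arbitrary layer sizes): keep ReLU/tanh in the hidden layers but leave the decoder's final layer linear, so the reconstruction can take any real value in the original [-4, 5] range. Alternatively, standardize or min-max scale the inputs and invert the transform after decoding.

```python
# Minimal PyTorch sketch (not the OP's code): hidden layers can use ReLU/tanh
# freely as long as the decoder's *final* layer is linear, so the output is
# unbounded and can cover the original [-4, 5] range.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_features),  # no activation: output is unbounded
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.rand(32, 20) * 9 - 4          # fake batch of values in [-4, 5)
model = Autoencoder(n_features=20)
loss = nn.MSELoss()(model(x), x)        # train with MSE on raw (or scaled) values
```

If you prefer a bounded output (e.g., sigmoid), the usual alternative is to min-max scale the data to [0, 1] first and undo the scaling after decoding.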

 

Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question:

I am working on a spatial time series analysis project. The task is to study the spatial distribution of point features (e.g., crime events, traffic accidents) over time. I aim to find the places with the following characteristics given the spatial distribution of point features across time:

  • places with consistently high-level concentration of point features
  • places with periodically high-level concentration of point features. "Periodic" here might mean that a place only has a great number of point features during special events (e.g., ceremonies or the national day)
  • places with suddenly high-level concentration of point features

I have used the Kernel Density Estimation method to compute the density of places across the study area through the study timeline. This way, I can get the time series of kernel densities for each location on the map, i.e., a matrix in which rows represent locations and columns denote time. Then what's next? How can I statistically find places with a large number of point features but different levels of temporal consistency over time? For instance, the following figure shows the spatial distribution of kernel densities of locations in New York City for four consecutive periods (in total I have about 15 periods). Red indicates high kernel densities, while green represents low kernel densities.

I have tried to use clustering techniques (e.g., KMeans and KShape) offered by the tslearn package in Python to cluster the time series of kernel density values of all the locations. But I can only differentiate them somewhat visually. Are there any statistical methods to achieve this goal?
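
One possible direction (a rough sketch, not from the original thread): compute simple per-location statistics on the density time series and threshold them, e.g., high mean with low coefficient of variation for "consistently high", a large jump in the first differences for "suddenly high", and autocorrelation at a known event spacing for "periodic". The array names, thresholds, and lag below are all placeholders.

```python
# Rough sketch: `density` stands in for the OP's (n_locations, n_periods)
# matrix of kernel density values; real data would replace the gamma draw.
import numpy as np

rng = np.random.default_rng(0)
density = rng.gamma(shape=2.0, scale=1.0, size=(500, 15))   # placeholder data

mean_d = density.mean(axis=1)                    # overall level per location
cv = density.std(axis=1) / (mean_d + 1e-9)       # temporal variability
jumps = np.diff(density, axis=1)                 # period-to-period changes
max_jump_z = (jumps.max(axis=1) - jumps.mean(axis=1)) / (jumps.std(axis=1) + 1e-9)

high = mean_d > np.quantile(mean_d, 0.9)                 # "high concentration"
consistent = high & (cv < np.quantile(cv, 0.25))         # high and stable over time
sudden = high & (max_jump_z > 2.0)                       # high with an abrupt spike

# "Periodic" is hard with only ~15 periods; with a longer series you could
# check the autocorrelation at the (hypothetical) event spacing:
lag = 5
autocorr = np.array([np.corrcoef(d[:-lag], d[lag:])[0, 1] for d in density])
periodic = high & (autocorr > 0.5)
```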

 

Join us here: [email protected]

7
Vol 9 Music (self.rwby)
submitted 2 years ago* (last edited 2 years ago) by ShadowAether to c/rwby
 

Anyone know when the soundtrack is coming out?

 

Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question:

The more I read, the more confused I am about how to interpret validation and training loss graphs, so I would like to ask for some guidance on how to interpret the values in the picture. I am training a basic UNet architecture. I am now wondering whether I need a more complex network model, or whether I just need more data to improve the accuracy.

Historical note: I had the issue where validation loss was exploding after a few epochs, but I added dropout layers and that seems to have fixed the situation.

My current interpretation is that the validation loss is slowly increasing, so does that mean it's useless to train further? Or should I let it train further, because the validation accuracy sometimes seems to jump up a little bit?
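
One common recipe for this situation (a minimal sketch, not the OP's code, shown with PyTorch-style state_dict calls): early stopping with patience, keeping the checkpoint with the best validation loss, so "how long to train" is decided by the validation curve rather than guessed. The train_one_epoch and evaluate callables are hypothetical stand-ins for the OP's training and validation passes.

```python
# Early-stopping sketch: stop once validation loss hasn't improved for
# `patience` epochs and roll back to the best checkpoint seen so far.
import copy

def train_with_early_stopping(model, train_one_epoch, evaluate,
                              max_epochs=200, patience=10):
    best_loss = float("inf")
    best_state = copy.deepcopy(model.state_dict())
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch(model)              # user-supplied training step
        val_loss = evaluate(model)          # user-supplied validation pass
        if val_loss < best_loss:
            best_loss = val_loss
            best_state = copy.deepcopy(model.state_dict())
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                       # validation loss stopped improving
    model.load_state_dict(best_state)       # restore the best checkpoint
    return best_loss
```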

8
submitted 2 years ago* (last edited 2 years ago) by ShadowAether to c/learnmachinelearning
 

Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question:

I'm training an autoencoder on a time series that consists of repeating patterns (because the same process is repeated again and again). If I then use this autoencoder to reconstruct another one of these patterns, I expect the reconstruction to be worse if the pattern is different from the ones it has been trained on.

Is the fact that the time series consists of repeating patterns something that needs to be considered in any way for training or data preprocessing? I am currently using this on the raw channels.

Thank you.
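
One way people often handle this (a sketch under assumptions, not from the original thread): slice the raw channels into windows roughly one repetition long and standardize each channel with training-set statistics, so each training example corresponds to one pattern and the per-window reconstruction error can be compared across repetitions. The window length and array shapes below are placeholders.

```python
# Windowing/normalization sketch: `x` stands in for the raw multichannel
# series with shape (n_samples, n_channels); `window` approximates one
# repetition of the pattern.
import numpy as np

def make_windows(x, window, step=None):
    """Slice the series into windows so each example covers ~one repetition."""
    step = step or window
    starts = range(0, len(x) - window + 1, step)
    return np.stack([x[s:s + window] for s in starts])  # (n_windows, window, n_channels)

def standardize_per_channel(windows, mean=None, std=None):
    """Normalize each channel using statistics from the training windows only."""
    if mean is None:
        mean = windows.mean(axis=(0, 1), keepdims=True)
        std = windows.std(axis=(0, 1), keepdims=True) + 1e-9
    return (windows - mean) / std, mean, std

x = np.random.randn(10_000, 3)                 # placeholder raw channels
train_w, mean, std = standardize_per_channel(make_windows(x, window=200))
# Per-window reconstruction error (e.g., MSE) can then flag windows whose
# pattern differs from the repetitions seen in training.
```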

 

cross-posted from: https://sh.itjust.works/post/48227

Presented on Wednesday, June 21 at 12:00 PM ET/16:00 UTC by Daniel Zingaro, Associate Teaching Professor at the University of Toronto, and Leo Porter, Associate Professor of Computer Science and Engineering at UC San Diego. Michelle Craig, Professor of Computer Science at the University of Toronto and member of the ACM Education Board, will moderate the question-and-answer session following the talk.

 

cross-posted from: https://sh.itjust.works/post/58054

Some seem better than others but a new shampoo & conditioner bar I bought seems much worse.

4
submitted 2 years ago* (last edited 2 years ago) by ShadowAether to c/learnmachinelearning
 

Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question:

I got a dataset from high-performance liquid chromatography; because HPLC is expensive, we only got about 39 data points. Each data point has 9 dimensions, representing the concentrations of 9 different substances. I tried different networks and the accuracy is not higher than 50% (we have four classes); however, KNN has an accuracy of more than 90%. I remember hearing that neural networks are not good on small datasets. Is this the reason? I have not tried SVM or other traditional machine learning models yet. Should I try them, and if yes, which one?
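
A quick way to answer the "should I try them, and which one" part empirically (a sketch assuming scikit-learn and placeholder data, not the OP's dataset): cross-validate a few classical models, since with ~39 samples a single train/test split is very noisy.

```python
# Compare a few classical models with cross-validation on a small dataset.
# X stands in for (39, 9) concentrations, y for the four class labels.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = rng.normal(size=(39, 9)), rng.integers(0, 4, size=39)   # placeholder data

models = {
    "knn": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3)),
    "svm_rbf": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "logreg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "rf": RandomForestClassifier(n_estimators=200, random_state=0),
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"{name}: {scores.mean():.2f} ± {scores.std():.2f}")
```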

 

Not OP. This question is being reposted to preserve technical content removed from elsewhere. Feel free to add your own answers/discussion.

Original question:

I'm being provided a dataset with several variables in it, and a success metric (1 or 0) at the end. I'm being asked to analyze the dataset and give insights on how to improve the success rate. To do this I intend to do a thorough data analysis to study correlations and relationships. However, I'm also intending to run a logistic regression to confirm these correlations with the feature coefficients.

My question is, if my sole interest is understanding the most important features determining the metric, and not building a robust model, should I still split my dataset into two? What benefit do I get from splitting it? Won't my exploratory analysis lose value if I'm putting away, let's say, 20%?

Thank you for your help
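
One way to frame it (a hedged sketch using statsmodels, which the original post doesn't mention): if the goal is interpreting coefficients rather than estimating predictive performance, fitting on the full dataset and reading off coefficients with confidence intervals is common; a holdout split mainly matters when you need an honest estimate of accuracy. The variable names and data below are placeholders.

```python
# Inference-oriented logistic regression: standardize features so coefficient
# magnitudes are comparable, then read off estimates, p-values, and CIs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 3)), columns=["var_a", "var_b", "var_c"])  # placeholder features
y = (X["var_a"] + 0.5 * X["var_b"] + rng.normal(size=500) > 0).astype(int)        # placeholder success metric

X_std = (X - X.mean()) / X.std()             # standardize for comparable coefficients
model = sm.Logit(y, sm.add_constant(X_std)).fit()
print(model.summary())                       # coefficients, std errors, p-values, CIs
```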
