InvolutedThymus

joined 2 months ago
[–] [email protected] 1 points 1 month ago

Thank you for the insightful question! It allows me to emphasize the important distinction between ketosis and atherosclerosis. Foam cell formation, which is a key event in the development of atherosclerotic plaques, is not limited to individuals with risk factors for heart disease. There’s evidence showing that plaque precursors, including foam cells, can be found even in healthy adolescents. This suggests that the initial stages of atherosclerosis might occur as part of natural biological processes, but the progression to harmful plaque formation depends on various factors, including lifestyle, genetics, and environmental triggers.

As for ketosis, the metabolic state is designed to utilize fat as a primary energy source. In individuals with excess body fat or those who have unfavorable lipid profiles, entering ketosis allows them to metabolize their more abundant fat stores, often resulting in improved lipid biomarkers and overall metabolic health. However, in individuals with a leaner, "healthy" phenotype and lower fat reserves, the situation is different.

When these individuals enter ketosis, if their body fat is below a certain threshold, their body may need to either mobilize fat from existing stores or synthesize new fats to provide substrates for ketone production. This process can lead to a temporary or sustained worsening of lipid biomarkers as the body shifts its fat metabolism pathways. This difference in how ketosis is utilized likely explains the variation in biomarkers you mentioned between individuals with different body compositions.

 

Neprilysin is an enzyme responsible for breaking down several key peptides, including B-type natriuretic peptide (BNP). BNP is a hormone released primarily from the heart’s ventricles in response to increased pressure and fluid volume. It acts as a signal to the kidneys to help reduce this overload by promoting diuresis (pee out fluid/volume) and natriuresis (pee out sodium). BNP also helps relax blood vessels (vasodilation), which lowers the overall stress on the heart.

The Evolution of BNP: Designed for Acute Response From an evolutionary standpoint, BNP’s role in fluid regulation likely evolved to address acute situations, such as sudden increases in blood volume or salt intake. For our ancestors, these situations might have been rare but critical, like consuming large amounts of water after periods of dehydration or reacting to sudden trauma or infection that caused fluid shifts.

BNP’s ability to trigger diuresis (fluid removal) and natriuresis (sodium removal) was a short-term emergency mechanism. This process worked well in the past because acute volume overload—a temporary state of excess fluid or salt—was something the body could quickly resolve. The kidneys would sense the signal from BNP, recognize the excess fluid, and excrete it efficiently.

Modern Challenges: Chronic Heart Failure and BNP’s Diminished Effectiveness In modern times, we face a different challenge: chronic heart failure (CHF). In CHF, the heart is under long-term, sustained pressure, often due to weakened heart muscle or other chronic conditions. BNP is still produced, but the problem is that the body begins to adapt to the constant signaling. The kidneys, which evolved to respond to acute signals from BNP, start to interpret the continuous signal as erroneous or irrelevant. Over time, this leads to resistance to BNP’s effects, making it harder for the body to rid itself of excess fluid and sodium.

It’s not that BNP stops working entirely. Rather, the chronic nature of heart failure causes the kidneys to “tune out” the signal because they’re designed to handle short bursts of fluid overload, not prolonged stress. As a result, even though the body wants to diurese (pee out the excess fluid), it can’t because the kidneys mistakenly believe that the persistent BNP signal is not a true indicator of overload. The acute system becomes ineffective in this chronic setting.

Measuring and Trending BNP: BNP and its more stable precursor, NT-proBNP, are measured using a blood test that helps assess heart function. In heart failure, the heart releases more BNP in response to increased pressure and fluid overload. The test measures the concentration of these peptides in the blood, providing critical insight into whether the heart is under stress.

The blood test involves drawing a sample from a vein, typically from the arm. The sample is then processed using immunoassay techniques, which detect and quantify the levels of BNP or NT-proBNP in the blood. The results are reported in picograms per milliliter (pg/mL), with higher values indicating more significant cardiac strain.

Why Is BNP Testing Important? The BNP test is particularly valuable for diagnosing heart failure. In emergency settings, when a patient presents with shortness of breath or other symptoms of fluid overload, measuring BNP can help clinicians quickly differentiate between heart failure and other conditions like lung disease. BNP levels rise in heart failure but remain relatively normal in non-cardiac causes of respiratory symptoms.

Beyond diagnosis, BNP levels are also useful in monitoring and trending heart failure over time. Clinicians often track BNP levels to assess how well the patient is responding to treatments like diuretics, ARBs, or neprilysin inhibitors. Trending BNP downward generally indicates effective fluid management and reduced cardiac strain. Rising levels may signal worsening heart failure or the need for more aggressive intervention.

BNP and NT-proBNP also have different half-lives, with NT-proBNP being more stable in the bloodstream. Therefore, in many clinical settings, NT-proBNP is used as it provides a longer window to assess the heart's function. Clinicians decide which peptide to measure based on the context, but both provide valuable, quantitative information that guides treatment decisions.
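If it helps to see what "trending" means in numbers, here's a minimal sketch in Python. The 30% change threshold and the example values are illustrative assumptions, not clinical cutoffs; real interpretation depends on the assay, the patient's baseline, and guideline criteria.

```python
# Minimal sketch of "trending" a natriuretic peptide over a treatment course.
# The 30% threshold and the example values are illustrative assumptions only.

def percent_change(baseline_pg_ml: float, followup_pg_ml: float) -> float:
    """Relative change in NT-proBNP (pg/mL) from baseline to follow-up."""
    return (followup_pg_ml - baseline_pg_ml) / baseline_pg_ml * 100

def trend_flag(baseline_pg_ml: float, followup_pg_ml: float,
               improvement_threshold_pct: float = 30.0) -> str:
    """Classify the trend; the threshold is a placeholder, not a guideline value."""
    change = percent_change(baseline_pg_ml, followup_pg_ml)
    if change <= -improvement_threshold_pct:
        return f"improving ({change:+.0f}%)"
    if change >= improvement_threshold_pct:
        return f"worsening ({change:+.0f}%)"
    return f"indeterminate ({change:+.0f}%)"

print(trend_flag(4200, 2500))  # improving (-40%)
print(trend_flag(1800, 2600))  # worsening (+44%)
```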

Neprilysin Inhibitors: A Modern Solution This is where neprilysin inhibitors like sacubitril come into play. By blocking neprilysin, the enzyme responsible for breaking down BNP, these drugs allow higher levels of BNP to circulate in the body. This essentially amplifies BNP’s natural effect, helping to promote diuresis, natriuresis, and vasodilation even in the context of chronic heart failure. The idea is to overcome the body’s adaptation to BNP by increasing its presence, essentially “turning up the volume” of the signal to the kidneys.

When combined with angiotensin receptor blockers (ARBs), which reduce the pressure on the heart, neprilysin inhibitors provide a much-needed boost to the natural BNP system, allowing it to perform its intended role even under the chronic conditions of heart failure.

[–] [email protected] 1 points 1 month ago

Delish! Thanks for the Qs

“Aren’t the shorter-lived strains functioning under genetic pressures in order to be short-lived?” They weren’t intentionally bred to be short-lived. It’s more of an unintended consequence. The goal was to create a docile, general-purpose lab mouse, and in the process of enriching for these traits, genetic diversity decreased. This reduction in diversity inadvertently shortened the lifespan in certain strains.

“From a research perspective, this makes sense as conducting studies through end-of-life would be more exhaustive if longer-lived strains were used.” I see your point, but the actual difference in lifespan is only about 0.5 to 1 year—so not as big a difference as it might seem when considering the added effort for end-of-life studies, or even just dealing with mice that have several more months of health/life. To take your numbers, it would only be 110 days, which is less than half a year.

“Outside of longevity, it would be better to use short-lived models.” Not necessarily. For example, heart disease is heart disease, and you don’t need to artificially impose unrelated lifespan limits to study it effectively. Long-lived models can still provide meaningful data on a variety of conditions without the confounding factor of an "unnaturally" short lifespan.

“Any intervention would undoubtedly help a short-lived strain…” That depends. For instance, if a strain is highly susceptible to cancer, interventions targeting cancer might extend its lifespan. However, if the strain tends to die of kidney disease, cancer therapeutics won’t affect longevity. The effectiveness of an intervention varies depending on the underlying/predominant cause of death in these strains.

“It would essentially be undoing years of genetic constraints that caused them to be short-lived in the first place.” Exactly—this is what I was getting at. In the study we’re discussing, the intervention not only had to counteract these added genetic or environmental stresses but also extend lifespan beyond the norm for long-lived strains. That’s what makes the result more meaningful in a way.

“There seems to be an invisible, yet squishy ceiling on lifespan up to a certain age with interventions…” The point I was trying to make was that the gene therapy in this case surpassed both the softer and harder limits you are referring to, suggesting that the therapy not only addressed the deficits these animals had but also pushed these shorter-lived animals past the hard ceiling for longevity set by the (theoretical) long-lived controls.

 

Lipemia to Ketosis: A Historical Journey Through Fats in the Blood Let’s take a deep dive into the world of fats in the blood, starting with the classic historical experiments on lipemia and ending with how ketosis and exogenous ketones fit into the picture of modern metabolic health.


Lipemia: The First Observations of Fat in the Blood The story of lipemia—the condition where blood becomes milky or cloudy due to high levels of fat—begins in the early days of lipid metabolism research. Scientists noticed that after a fatty meal, blood drawn from patients sometimes had a creamy layer floating on top. This layer turned out to be chylomicrons, large fat particles that carry triglycerides (fat molecules) from the intestines into the bloodstream after eating.

In other cases, blood became uniformly turbid or milky—this was due to smaller fat particles (like VLDL and triglycerides) staying suspended in the liquid portion of the blood. These early observations of lipemia led to breakthroughs in understanding how fats travel through the body, why some people have abnormal lipid levels, and the role of these fats in metabolic disorders.


Lipemia and Cardiovascular Risk: The Link to Atherosclerosis From these early experiments, researchers learned that high triglycerides, the primary cause of lipemia, were linked to a higher risk of cardiovascular disease (CVD). When blood contains high levels of triglycerides, it also tends to carry more VLDL particles, which are rich in fats and cholesterol. Over time, this can contribute to plaque formation in the arteries.

LDL Oxidation and Foam Cells: As LDL ("bad cholesterol") circulates through the blood, it can become oxidized, especially in people with elevated triglycerides or metabolic disorders. Oxidized LDL is engulfed by macrophages (immune cells) that turn into foam cells, which form the basis of atherosclerotic plaques.

Inflammation and Endothelial Dysfunction: The accumulation of foam cells and fatty plaques in the arteries triggers inflammation, which further damages the arterial walls, eventually leading to the narrowing of blood vessels. This is the foundation of atherosclerosis, a key driver of heart attacks and strokes.


Ketosis: The Body’s Natural Fat-Burning Mode While lipemia deals with fats after digestion, ketosis is the body’s way of shifting to fat burning as a primary energy source. When carbohydrates are restricted, as in a keto diet or during fasting, the liver breaks down stored fat into fatty acids, which are converted into ketones. These ketone bodies (like beta-hydroxybutyrate) are then used as fuel in place of glucose.

The metabolic benefits of ketosis include:

Fat loss: By burning stored fat for energy, ketosis can lead to significant weight loss.

Lower triglycerides: Over time, ketosis can reduce the amount of triglycerides in the blood, which helps improve cardiovascular health. (The caveat here is that triglyceride levels will be transiently higher after meals, which for keto practitioners generally have a higher fat content. Fasted blood monitoring is very important, as will be discussed later.)

Improved insulin sensitivity: Ketosis reduces the body’s reliance on glucose, which can enhance insulin sensitivity and help manage conditions like type 2 diabetes.


Exogenous Ketones: The Shortcut, or Snake Oil? Exogenous ketones are supplements that raise blood ketone levels without needing to follow a strict ketogenic diet. While they may temporarily increase ketones in the blood and provide a quick source of energy, the true metabolic benefits of ketosis—like fat loss and improved metabolism—come from the body producing its own ketones by breaking down stored fat.

Here’s the difference:

What Exogenous Ketones Do:

They raise blood ketone levels quickly and may provide mental clarity or boost athletic performance, but only in someone who needs an energy source! As in calories!! If you are experiencing the “Keto crash,” exogenous ketones are useful in short bursts for quick energy while allowing you to remain compliant with your macros.

What They Don’t Do:

They don’t trigger the body to burn fat for fuel, so they don’t promote fat loss on their own.

They don’t replicate the long-term benefits of being in natural ketosis, which is driven by metabolic changes like improved fat metabolism and lower insulin levels.

While exogenous ketones can be useful for athletes or those in the keto trenches seeking a temporary cognitive respite from the keto fog, they’re often overmarketed as a way to achieve the benefits of ketosis without diet changes.

I’m exercising a lot of restraint here cuz ya boi wants to go off! Can’t believe I managed to only imply that it is snake oil once in the title and then again here. That's it. Good job me!


Connecting Lipemia, Ketosis, and Cardiovascular Health At the intersection of lipemia and ketosis lies a key question: How do these metabolic processes influence cardiovascular risk?

Lipemia and CV Risk: Elevated triglycerides, as seen in lipemia, are linked to a higher risk of atherosclerosis and cardiovascular events. When the blood is overloaded with fat, it contributes to plaque formation and arterial damage.

Ketosis and Lipid Profiles: Natural ketosis, achieved through diet or fasting, can reduce triglycerides and potentially improve LDL/HDL ratios. However, some people may experience a rise in LDL cholesterol on a ketogenic diet, so it's important to monitor lipid levels carefully.
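To make the "monitor lipid levels carefully" point concrete, here's a minimal sketch using the standard Friedewald estimate for LDL. The patient numbers are made up, and the estimate is only valid on a fasted sample with triglycerides under about 400 mg/dL, which is exactly why fasted draws matter on keto.

```python
# Sketch: why fasted lipid panels matter on keto. Uses the standard Friedewald
# estimate (mg/dL): LDL ~= TC - HDL - TG/5, which is unreliable post-meal or
# whenever triglycerides exceed ~400 mg/dL. All patient values are made up.

def friedewald_ldl(total_chol: float, hdl: float, triglycerides: float) -> float:
    if triglycerides > 400:
        raise ValueError("Friedewald estimate unreliable above 400 mg/dL TG; "
                         "request a direct LDL measurement instead.")
    return total_chol - hdl - triglycerides / 5

# Same hypothetical person, fasted vs. a few hours after a high-fat meal:
print(friedewald_ldl(210, 55, 90))   # fasted: 137.0 mg/dL
print(friedewald_ldl(210, 55, 300))  # post-meal TG spike: 95.0, an artificially "nicer" LDL
```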


In Summary: Be safe out there y’all. I’m shouting these truths cuz I luv ya.

No need to break the bank on exogenous ketone products. Keto can be great for the right people, but please be careful. Don’t get too comfortable out there with elevated lipids even if your doctor is ‘hip’ and says “we don’t really know the long-term effects of keto…” BAH! There is strong evidence linking lipemia to cardiovascular risk, you know, heart attacks and strokes. Take a statin or something.

With love,

InvolutedThymus ~XOXO

 

Troponins: The Heart of Cardiac Diagnostics

Troponins are a big deal in medicine, especially when it comes to diagnosing heart attacks. Let's break down the basics of troponins, how they work in the body, and how we detect them in the lab.

What Are Troponins? Troponins are proteins involved in muscle contraction, found in both skeletal and cardiac muscle. They’re part of the troponin-tropomyosin complex, which helps regulate muscle contraction by interacting with actin and myosin.

There are three types of troponins:

Troponin C (TnC) – Binds to calcium, triggering muscle contraction.

Troponin I (TnI) – Inhibits muscle contraction in the absence of calcium.

Troponin T (TnT) – Anchors the troponin complex to the muscle fiber.

For cardiac diagnostics, we focus on cardiac-specific troponins: Troponin I (cTnI) and Troponin T (cTnT). These versions are unique to the heart muscle, making them incredibly important markers for detecting cardiac injury. When the heart is damaged, like during a heart attack, these cardiac-specific troponins are released into the bloodstream, signaling that heart cells are in distress.

Why Are Troponins Important? Elevated troponin levels are a key indicator of myocardial injury (damage to the heart muscle), most often associated with acute coronary syndrome (ACS), which includes heart attacks. When heart cells are damaged due to ischemia (lack of oxygen), they break open, and troponins leak into the blood.

Troponin tests are the gold standard for diagnosing heart attacks because they are sensitive (they detect even small amounts of damage) and specific (they’re associated with cardiac tissue, not other types of damage).

This specificity is very important because, in the ED, panic attacks and even gastric reflux can present very similarly to heart attacks. The lab is crucial to making sure we catch the cases that even the most experienced clinicians miss!

How Are Troponins Detected in the Lab? The process of measuring troponins in the lab is highly standardized and involves several steps, starting with blood collection and ending with immunoassay analysis:

Blood Collection and Processing:

Troponin tests are typically done using either the green top (heparin) or lavender top (EDTA) tubes, depending on the lab. These tubes prevent blood from clotting, allowing for plasma testing, which is faster than waiting for the blood to clot (serum testing). Speed here is important for obvious reasons. (Technically both plasma and serum can be and are used to measure troponins, but plasma testing is more common because it allows for quicker processing—there’s no need to wait for the blood to clot.)

After collection, the sample is centrifuged to separate the plasma from the blood cells and transferred to a sample cup for testing by immunoassay, which uses specific antibodies designed to bind cardiac-specific troponins (either cTnI or cTnT). When these antibodies bind to troponin in the blood sample, they trigger a signal (usually via chemiluminescence or fluorescence) that can be measured. The strength of the signal corresponds to the amount of troponin present in the blood. This value is then compared to a reference range.

Each lab sets a reference range based on the assay, but in general, elevated levels of troponins suggest myocardial damage. The rise and fall pattern of troponin levels is key in diagnosing a heart attack—you’re not just looking for elevated levels, but how they change over time.

Timing: Troponins typically rise within 3-6 hours after a cardiac event, peak at 12-24 hours, and can remain elevated for days. This pattern helps clinicians confirm the presence and extent of cardiac injury.
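As a rough sketch of that rise-and-fall logic, here's how serial draws might be flagged in code. The 20% delta, the reference limit, and the units are assumptions chosen for illustration; every lab applies its own assay-specific cutoffs.

```python
# Sketch of the rise-and-fall logic: serial troponin draws plus a simple delta
# check. The 20% delta, the reference limit, and the units are illustrative
# assumptions; every lab applies its own assay-specific cutoffs.
from typing import List, Tuple

def troponin_interpretation(series: List[Tuple[float, float]], upper_ref_limit: float,
                            min_delta_pct: float = 20.0) -> str:
    """series: list of (hours_from_presentation, troponin_value) pairs."""
    values = [v for _, v in series]
    elevated = any(v > upper_ref_limit for v in values)
    delta_pct = (max(values) - min(values)) / max(min(values), 1e-9) * 100
    if elevated and delta_pct >= min_delta_pct:
        return "elevated with a dynamic rise/fall: consistent with acute myocardial injury"
    if elevated:
        return "elevated but flat: consider chronic myocardial injury or other causes"
    return "within reference range on serial draws"

# Hypothetical draws at 0 h, 3 h, and 6 h after presentation:
print(troponin_interpretation([(0, 12), (3, 80), (6, 240)], upper_ref_limit=34))
```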

Other Markers for Myocardial Infarction In addition to troponins, other markers may be measured for myocardial infarction (MI), though they’ve largely been replaced by troponins due to their higher sensitivity and specificity:

Creatine Kinase-MB (CK-MB):

CK-MB is an enzyme found in heart muscle and was used widely before troponins became standard. CK-MB rises and falls more quickly than troponins, which can be useful in detecting a reinfarction shortly after a heart attack. Many old-school cardiologists swear by this marker!

Myoglobin:

Myoglobin is a small oxygen-binding protein that rises rapidly after muscle damage, but because it’s also found in skeletal muscle, it’s not specific to the heart. However, it can be an early marker for cardiac damage due to its rapid rise.

BNP/NT-proBNP:

These markers are released by the heart in response to stretching from heart failure. While not specific for heart attacks, elevated BNP levels can help differentiate between heart failure and other causes of similar symptoms, such as shortness of breath. More on this in a future post!

Feel free to drop any questions or thoughts below! Let’s chat about how these tests have changed modern cardiology and what’s next in cardiac biomarker research!

[–] [email protected] 4 points 1 month ago

TIL what a true phantom is. Neat!

I am not terribly adjacent to radiology but do find this niche product fascinating. Thank you!

[–] [email protected] 2 points 1 month ago (2 children)

I’d like to add something to this discussion, but first, I want to acknowledge that all of these points are correct, important, and well taken. That said, a subpar control doesn’t necessarily equate to a subpar study or suggest that an intervention isn’t worth getting excited about. What the "900-day rule" indicates is what the expected median lifespan of a healthy control group should be. Control groups that fall short of this 900-day benchmark are facing an additional stressor (genetic or otherwise) that is negatively affecting their longevity. When comparing the experimental arm to these controls, the conclusion must include that the intervention is at least partially increasing their lifespans by counteracting these added stressors.

So, a simple way around throwing the entire study out would be to compare the experimental arm to a theoretical 900-day cohort. If the intervention group has a median lifespan of around 37 months, that translates to 1,125 days—about a 25% increase over the theoretical, normal, healthy 900-day control group. Yes, 25% is less dramatic than 41%, and it may not be as robust as some rapamycin results, but it is still a significant increase in longevity compared to both a healthy control group and the in-study control that shows evidence of stressors affecting all mice.
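For transparency, the arithmetic behind that comparison is simple enough to show:

```python
# Back-of-the-envelope version of the comparison above; 30.44 days/month is
# just the average calendar month, and the other numbers come from the comment.
median_intervention_months = 37
days_per_month = 365.25 / 12              # ~30.44
benchmark_days = 900                      # the "900-day rule" healthy-control median

intervention_days = median_intervention_months * days_per_month   # ~1126
pct_over_benchmark = (intervention_days / benchmark_days - 1) * 100

print(f"{intervention_days:.0f} days, ~{pct_over_benchmark:.0f}% over a 900-day control")
```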

I argue that the utility of the "900-day rule" isn’t to dismiss studies that don’t meet this benchmark, but rather to provide another metric to aid in our interpretation of the data.

Great post!

 

Why the Interventions Testing Program (ITP) Is the Gold Standard for Longevity Research

https://www.nia.nih.gov/research/dab/interventions-testing-program-itp/supported-interventions

If you’re following the latest trends in longevity research, you’ve probably heard about a few "miracle compounds" like metformin, resveratrol, and NAD boosters. But let’s get real—when it comes to real preclinical data that actually holds up, nothing beats the Interventions Testing Program (ITP). And no, we’re not here to play along with the hype around these compounds that often don’t deliver in the lab.

What Is the ITP and Why Should You Care?

The Interventions Testing Program (ITP), run by the National Institute on Aging (NIA), is a uniquely well-designed testing platform that focuses on evaluating the effects of different compounds on lifespan and healthspan in outbred mice (meaning they have genetic diversity more like human populations, unlike the typical inbred strains). This gives the data more translational potential: interventions that show efficacy in these outbred, well-powered experiments are more likely to translate to humans, and the results aren’t confined to niche genetic conditions.

The studies are conducted across multiple independent sites to ensure reproducibility and robustness of the data. Large sample sizes and rigorous methodology make this program a powerhouse for reliable preclinical results. If an intervention can work here, especially across different sites and genetically diverse mice, it’s hitting on something fundamental, not just a quirk of a specific strain or environment.
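As a very rough illustration of what "well-powered" means, here's a back-of-the-envelope sample-size calculation. It uses a plain two-sample t-test approximation rather than the survival statistics the ITP actually employs, and the effect size is an assumption chosen purely for illustration.

```python
# Rough illustration of "well-powered": mice needed per arm to detect a modest
# lifespan effect, using a plain two-sample t-test approximation (NOT the
# survival-analysis methods the ITP actually uses). Effect size is an assumption.
from statsmodels.stats.power import TTestIndPower

# Cohen's d of 0.35 roughly corresponds to a ~10% lifespan shift if the
# lifespan SD is around 28% of the mean (assumed numbers for illustration).
n_per_arm = TTestIndPower().solve_power(effect_size=0.35, alpha=0.05, power=0.8)
print(f"~{n_per_arm:.0f} mice per arm")  # on the order of 130
```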

Metformin, Resveratrol, and NAD: Why We’re Not Impressed

Let’s get into why some of these so-called longevity wonder drugs don’t quite live up to their reputations:

Metformin: While metformin has shown promise for managing glucose levels in type 2 diabetes, the data for longevity just doesn’t hold up. Large human clinical studies have failed to show any clear lifespan-extending benefits in healthy individuals. The ITP didn’t find significant longevity effects either, so while it’s a great tool for diabetes management, it’s not the fountain of youth people hoped for. Move on, former mentor of mine whom I won’t be naming... move on.

Resveratrol: Remember when resveratrol was the darling of longevity science? With its *cough* "too-good-to-be-true" lab data, it got people excited about its potential anti-aging properties—until the ITP results showed no significant impact on lifespan... And labs that weren't financially invested didn't show an impact on lifespan... And then the original lab results were shown to be a bit too nice... and then... Ya, it's all BS. Go check Retraction Watch or some other forum that hunts for data manipulation. The red flags are R-E-D, and it's not from a wine stain...

NAD Boosters: NAD boosters are the latest fad, but here’s where it gets funny: the same people (person... but let's say "people" for legal reasons) behind resveratrol are now pushing NAD as the next big thing. Fool me once with resveratrol, fool me twice with NAD? The only NAD booster the ITP has tested so far is nicotinamide riboside (NR), which showed no impact on lifespan and operates in the same pathway as NMN and NAD+. I won't be holding my breath. Until compelling work of ITP caliber shows that boosting NAD+ levels leads to a longer life, it's best to stay skeptical.

Rapamycin and Acarbose: The True Stars of the ITP

Now let’s talk about the real heroes of the ITP: rapamycin and acarbose. These are the compounds you should be hyped about, and the data to back it up is strong.

Rapamycin: If there’s one compound that’s consistently shown to extend lifespan, it’s rapamycin. Rapamycin works by inhibiting the mTOR pathway, which is a core regulator of growth, metabolism, and aging. What’s exciting about rapamycin is that it works not just in mice, but across a broad range of organisms—C. elegans, fruit flies, yeast, and even mammals. This isn’t some niche treatment that only works in tightly controlled conditions—it targets a fundamental mechanism in aging. It’s the real deal.

Acarbose: Often overshadowed by rapamycin, acarbose has also shown significant lifespan extension in the ITP, especially in male mice. Acarbose works by slowing carbohydrate digestion and preventing large post-meal glucose spikes. And here’s the kicker: rapamycin and acarbose seem to work synergistically—they’re even more effective together than they are alone, suggesting they target different but complementary pathways involved in aging and metabolism.

Both these drugs have shown impressive lifespan extension through the ITP and they play together nicely as well! See here: https://pubmed.ncbi.nlm.nih.gov/36179270/.

Anyway, cool list of drugs, right? Anything on the in-process list that catches your eye? Any surprising revelations? What should be next?

10
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

Let’s Talk Blood Collection Tubes and What the Colors Mean!

Ever wonder what those different colored tops on blood collection tubes actually mean? Each color tells us something important about what’s inside the tube and how the blood will be processed. Let’s break it down, adding some cool science along the way!

Red Top – No Additive Your blood is your blood! This tube contains no additives. A key property of blood is clotting—when exposed to air (more accurately, to activating surfaces and factors), blood naturally forms a clot as platelets and clotting factors work to seal off a wound. In the lab, we let this clotting happen. The clot traps certain components, like cells and clotting proteins, and after centrifugation it is spun down to the bottom of the tube. What’s left behind is serum, the liquid portion of blood without cells or clotting factors. This is used for tests like liver function or kidney panels where serum chemistry is important.

Gold or Tiger Top (SST) – Serum Separator Tube Think of this as “Red with a plug!” This tube has a gel that sits between the clot and the serum after spinning the sample in a centrifuge. The gel forms a barrier, making it easier for lab techs to separate serum from the clot—no extra pipetting required. It’s a time-saver in the lab, used for tests similar to those with the red top.

Light Blue Top – Sodium Citrate This tube stops clotting by binding calcium, a key player in the clotting cascade. Without calcium, the cascade can’t complete, so the blood remains unclotted. This is crucial for coagulation studies, like PT/INR (Prothrombin Time/International Normalized Ratio) and aPTT (Activated Partial Thromboplastin Time), which measure how well your blood is clotting. These tests are essential for monitoring patients on blood thinners or those with clotting disorders. The light blue tube is all about studying the clotting process without letting it happen in the tube. (There’s a small worked INR example after the tube rundown below.)

Lavender or Purple Top – EDTA (Ethylenediaminetetraacetic acid) EDTA works similarly to sodium citrate by binding calcium, but it’s used for a different purpose. While citrate prevents clotting for coagulation tests, EDTA is perfect for preserving the shape of blood cells. This makes it essential for tests like Complete Blood Counts (CBCs), where we’re looking at the size, shape, and number of blood cells under a microscope. EDTA doesn’t alter the cells, allowing for accurate analysis, whereas citrate tubes are reserved for tests where the clotting factors themselves are the focus.

Gray Top – Sodium Fluoride/Potassium Oxalate Here’s where we get a bit more technical. This tube contains sodium fluoride, which inhibits enolase—a key enzyme in glycolysis, the process by which cells break down glucose for energy. By blocking enolase, glycolysis in the blood cells grinds to a halt, so the cells stop consuming glucose and the level stays stable until it can be measured. This is why the gray top is used for glucose testing. It also contains potassium oxalate, which further prevents clotting. Additionally, it’s often used for lactate testing, important for understanding conditions like sepsis.

Light Yellow Top – ACD (Acid Citrate Dextrose) This one’s a bit more niche but fascinating. ACD is used in blood banking, especially for DNA testing or preserving blood for extended periods. The citrate prevents clotting by binding calcium, and the dextrose acts as a sugar source to keep the cells happy and viable for a longer shelf life. In donated blood, this mix helps preserve the cells for several weeks, making it critical for blood storage and transfusion medicine.

Pink Top – EDTA (Like Lavender) You might wonder why we have pink when we already have lavender. Pink tops are specifically designated for blood banking, like crossmatches and blood typing. While it uses the same EDTA as lavender, the different color coding helps labs differentiate between general hematology tests (lavender) and blood banking tests (pink), ensuring the right sample goes to the right department.

Dark Blue (Royal Blue) Top – Trace Element-Free This tube is special because it’s trace element-free. The tube itself is rigorously cleaned and prepared to avoid contamination from metals like zinc, copper, or lead. This is critical when testing for trace elements, where even a tiny contamination from the tube could alter the results. Some versions have EDTA, while others are additive-free, depending on the test being performed.

Black Top – Sodium Citrate (Like Light Blue) Similar to the light blue top, this tube also contains sodium citrate, but with a different citrate-to-blood ratio. It’s used for the ESR (Erythrocyte Sedimentation Rate) test, which measures how quickly red blood cells settle in a test tube over an hour. ESR can give clues about inflammation in the body, making it useful for diagnosing conditions like autoimmune disorders or infections. The black top tube’s ratio is optimized for the slow sedimentation process.

Orange Top – Thrombin This tube contains thrombin, a clotting agent that accelerates the clotting process. It’s used when you need serum quickly for emergency chemistry tests, like in the case of urgent cardiac enzyme tests. The thrombin speeds up the clot formation so the serum can be separated faster than in a red or gold top tube.

Green Top – Heparin This guy contains heparin, an anticoagulant that prevents clotting by inhibiting thrombin and other clotting factors. This tube is used for plasma testing in clinical chemistry labs, especially for tests like electrolytes, ammonia, and troponins. Heparinized plasma allows for quicker processing because the blood doesn’t need to clot before being tested, making it ideal for urgent tests, such as when ruling out acute myocardial infarction (MI).
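Circling back to the PT/INR mentioned under the light blue top: the INR itself is just a small formula, the patient's prothrombin time normalized to the lab's mean normal PT and the reagent's International Sensitivity Index (ISI). A tiny worked example, with illustrative values:

```python
# The INR formula: patient PT normalized to the lab's mean normal PT, raised to
# the reagent's International Sensitivity Index (ISI). Example values are illustrative.

def inr(patient_pt_sec: float, mean_normal_pt_sec: float, isi: float) -> float:
    return (patient_pt_sec / mean_normal_pt_sec) ** isi

print(round(inr(patient_pt_sec=24.0, mean_normal_pt_sec=12.0, isi=1.1), 2))  # ~2.14
```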

Serum vs. Plasma - a reCAP

Serum is the liquid portion of blood after it has clotted. When blood clots, the cells and clotting factors form a clot, leaving behind a clearer liquid—serum. This is what you get in tubes like the red or gold top (with no additives or a serum separator). Serum contains everything plasma does except the clotting factors, which are trapped in the clot. It’s often used for chemistry panels or hormone tests.

Plasma is the liquid portion of blood before clotting. It’s what you get when you use tubes like the green top (heparin) or lavender top (EDTA) that prevent blood from clotting. Plasma contains water, proteins, electrolytes, hormones, and clotting factors (because the blood hasn’t clotted yet). It’s ideal for tests where you need to measure things like troponins or electrolytes quickly since the sample doesn’t need to clot first.

And there you have it—a breakdown of the colorful world of blood collection tubes and their specific uses. The next time you’re in a lab or having blood drawn, you’ll know exactly what those tubes are doing and why they’re so important!

Have any other lab-related questions? Let’s chat about them below!

2
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Full disclosure, this is outside my area of expertise (whatever that means…).

I want to talk about the thymus and its importance in aging. I recently came across a fascinating paper that builds on a model of human lymphopoiesis across development and aging, and I wanted to share it with you all: (https://pubmed.ncbi.nlm.nih.gov/38908962).

The thymus plays a key role in the immune system, especially in the production and maturation of T-cells, which are crucial for immune responses. One of the things that really piqued my interest is how the paper discusses developmental transitions in the thymus and how these changes potentially affect the immune system throughout life. It’s especially interesting how thymic involution with age may impact immune health, and how this could tie into the overall aging process.

To me, it's wild that the thymus pretty much "dies" before we’re even out of our teens... Seriously, look at Figure 5. This idea has kept me up at night for about a decade now. Anyway, I’m in a transition phase of my career and am fortunate enough to have the latitude to start thinking deeply about the thymus. So let’s chat—we can struggle through my learning phase together!

While I’m still learning the specifics, this got me thinking about the potential implications for therapies aimed at rejuvenating or maintaining thymic function in older individuals. Could these interventions help us preserve immune function as we age? You ever hear of this guy Greg Fahy? Interesting person. He has a fascinating publication history in the area of cryopreservation (another field I want to dive into, and we should totally discuss), but he’s also attempting to rejuvenate the thymus. Here’s one of his papers: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6826138.

Honestly, I’m not sure where I stand on this yet. I find the hypothesis really interesting, but I’m in no way an immunologist. I’d love to hear your thoughts! If you’ve worked in this space or know of any relevant research, feel free to share. And if you haven’t but have a hot take, I’d love to hear that too! No barrier to entry—feel free to open this up.

What areas in aging or immunology are you curious about? What do you think will get us to 130+ years on this planet!?

[–] [email protected] 2 points 2 months ago (1 children)

My suspicion is that a big part of this discussion revolves around integrating vs. non-integrating gene therapies, so let’s start there.

At a high level, viral gene therapies use a viral vector (the capsid or container) to deliver a genetic payload into a cell. That payload can then integrate into the host’s genome if integration machinery (an integrase, for example) is delivered as part of the payload, but that doesn’t necessarily have to be the case.

Plasmid therapies, on the other hand, involve non-chromosomal DNA that stays outside the host’s genome but can still express the proteins it encodes independently. In most cases, plasmids don’t come with the machinery that promotes integration with the host genome, but it’s not an absolute safeguard against integration either. Additionally, a plasmid cargo still needs a vector to gain access into the cell.

In the follistatin gene therapy paper, the authors use a plasmid (the genetic cargo) to encode instructions for follistatin. They deliver this via polyethyleneimine (PEI), a cationic polymer that helps get the plasmid into cells—so PEI acts as the vector here, instead of a virus.

Now, the superiority of either approach really depends on the use case:

Integrating Gene Therapies (more commonly viral-based, since many naturally have integrating machinery that can be included as part of the cargo) are ideal when you want a one-time, permanent fix—for example, in conditions like sickle cell anemia, where a single gene mutation needs to be corrected. In this case, you’d want the therapeutic gene to integrate into the genome for long-term expression and potentially a cure with just one treatment.

Non-integrating Therapies (more commonly plasmid + non-viral vector based) are ‘better’ when you want temporary gene expression. For example, if you're priming the body to fight a new pathogen or delivering a protein with a temporary therapeutic effect, plasmid-based therapies are argued to be more practical. These are also great for delivering proteins that need short-term action but shouldn’t stick around indefinitely, especially if there’s a risk of side effects from prolonged exposure.

That said, I don’t see why viral/non-plasmid strategies couldn’t do these things as well. In fact, many such strategies are in development.

Other Considerations for Viral vs. Plasmid-Based Therapies: Viral Vectors: These also come with higher risks like immune responses, insertional mutagenesis (which can potentially lead to cancers), and limited payload sizes. There are some neat solutions to these in the research sector that we should chat about in the future.

Plasmid Vectors: Generally less immunogenic, but they offer shorter-lived expression, meaning you might need repeated doses to maintain effects. The big benefit in my opinion is they deliver a much larger payload when compared to viruses. Not relevant if you are aiming for a single gene therapeutic but I feel it's the big draw.

Now, About the Follistatin Paper... I’ll hold back some of my critiques of the paper that are beyond the scope of your question, but let me address the safety aspects they mention:

Inherently Transient Expression: This is generally true for plasmids since they don’t integrate into the genome. However, I’m cautious about saying this is 100% guaranteed. There’s always a small risk of integration, even with non-integrating strategies, although the probability is low.

Drug-Inducible Reversibility: The paper mentions this, but it’s not clear how exactly they plan to achieve it. They didn’t include details about the plasmid construct or any antibiotic kill switch, which would be crucial to back up their claim. If such a switch were tied to any potential integrations, in theory, it could allow them to kill off any cells where integration occurred—but more details are needed here. This strategy also isn’t 100% effective, by the way.

Excision of Transfected Tissue: This one made me laugh a bit—“Oops, we made a tumor—CUT IT OUT!” Brilliant and novel, guys. Thanks for mentioning it. While theoretically possible, it doesn’t seem like a reasonable safety net for a clinical approach. Given that cancer development is one of the big concerns with these therapies, and cancer is notoriously slippery, this doesn’t offer much reassurance.

In my opinion, the advantages of plasmids mentioned in the paper could also apply to viral vectors.

So, Where Do I Stand? Both viral and plasmid approaches have their place, and the choice really depends on the situation and how the technology evolves. I suspect that in the long term, viral vectors will be the better choice, despite their risks. There’s a lot of work going into custom capsid design, which will allow for specific targeting and immune evasion. I think the idea that plasmid-based therapies are "safer" may be leading to a false sense of security.

That said, I’m definitely flirting with both. Can you ask me again in 5 years? Maybe 10?

What are your thoughts?

[–] [email protected] 2 points 2 months ago

Great question! I don’t want to downplay the utility of multiplex PCR—we have in-house panels that we frequently rely on. However, there are two key drawbacks: cost and breadth. The reagents for these assays are quite expensive, and they can only detect what is on the panel, which is dictated by the species-specific primers. We use the BioFire system here, which you can look up if you’re curious about the panels. Another sequence-based option would be using assays like Karius (also Google-able), which is an unbiased approach that detects microbial cell-free DNA and attempts to match it to a library. When it first came out, Karius was supposed to revolutionize infectious disease diagnostics but failed to gain strong footing due to its cost, turnaround time, and the ambiguity of the data you get back.

MALDI-TOF proteomics is the gold standard because it’s fast, cost-effective, and requires minimal sample preparation compared to sequencing.

MALDI-TOF is not highly targeted in the sense of picking specific proteins of interest. Instead, it generates a broad mass spectrum “fingerprint” of all the proteins (primarily ribosomal proteins) present in the organism (we can do fungi too). The key is that the spectrum is matched against a reference database of known profiles. So, it’s a comparative method, rather than specifically aiming for certain proteins. The spectra tend to be consistent and reproducible for each species, which is why it works so well for identification. The reference library is massive and constantly growing with more samples, so generally speaking, you are not restricted to a panel of select organisms (there are caveats to this, but you know, generally speaking).

Typically, there are about 10-20 prominent proteins, with most of these being small, abundant proteins like ribosomal proteins. These are what the machine "sees" best and uses to generate the profile. It’s not that we have ‘proteins of interest’ per se—it’s more that each organism presents a predictable set of proteins to the MALDI. If we know that, we can identify the organism. Many organisms present ribosomal proteins, which is convenient because ribosomal components are classic markers for identifying organisms down to the species level. However, some organisms present other proteins as well.
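If it helps to picture the matching step, here's a toy sketch of the idea: bin the peak masses into a coarse profile, then score the unknown against reference profiles. The organisms, peak lists, bin width, and scoring here are all made up for illustration; real systems (e.g., the Biotyper) use far more sophisticated, proprietary scoring.

```python
# Toy sketch of "fingerprint matching": bin each spectrum's peak masses into a
# coarse profile, then score the unknown against reference profiles with cosine
# similarity. Organisms, peak masses, bin width, and scoring are all made up.
import math
from typing import Dict, List

def binned(peaks: List[float], bin_width: float = 10.0) -> Dict[int, float]:
    """Collapse peak m/z values into coarse bins (intensities ignored for simplicity)."""
    profile: Dict[int, float] = {}
    for mz in peaks:
        key = int(mz // bin_width)
        profile[key] = profile.get(key, 0.0) + 1.0
    return profile

def cosine(a: Dict[int, float], b: Dict[int, float]) -> float:
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

reference = {
    "Staphylococcus aureus (toy profile)": binned([3444, 4305, 5032, 6888, 9620]),
    "Escherichia coli (toy profile)": binned([4365, 5096, 6255, 7158, 9742]),
}
unknown = binned([3446, 4301, 5034, 6885, 9622])
best_match = max(reference, key=lambda name: cosine(unknown, reference[name]))
print(best_match, round(cosine(unknown, reference[best_match]), 2))
```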

Let me know if you have any other questions!

[–] [email protected] 3 points 2 months ago

Yes, that is exactly why! The safety concerns around using spectrometry for anthrax primarily stem from how the samples are handled and prepared. Nuance incoming!

The dogma in our lab is that mass spectrometry, especially MALDI-TOF, involves creating an aerosol or vapor from the sample, which could potentially release live spores or other dangerous particles into the environment. In the case of anthrax, because it’s a highly infectious pathogen, this aerosolization could pose serious biohazard risks if the spores aren’t completely neutralized.

In reality, it's much more likely that the true concern lies in the upstream processing. In fact, many labs have the capacity to, and ultimately do, run anthrax samples on the MALDI. This is because the samples are chemically deactivated with reagents like trifluoroacetic acid and α-cyano-4-hydroxycinnamic acid, which also aid in the production of adduct ions that are ultimately detected by the machine.

A key difference between most hospital microbiology labs and the reference labs is the biosafety classification. At my location, for example, the only part of the lab that is rated Biosafety Level (BSL) 2 is the mycology suite. To handle anthrax safely, you would want manipulations performed in a BSL-3 lab within a class 2 safety cabinet, which is what the reference labs would do. Then, once the sample is inactivated, they proceed to MALDI. In hospital labs, we usually limit our manipulations of possible anthrax and therefore use quick assays to rule it out. If we can’t, we send it to other labs... through the mail... there may be a dark joke somewhere in there.

Fun fact: most of Robert Koch’s (a, if not the, father of germ theory) early work was actually with the anthrax bacillus, long before our BSL equipment existed!

 

Hello Mainlined Science! We’re always looking for new topics and ideas to dive into, so we’d like to start getting some engagement! Got a question, a research area you’re curious about, or just something science-related that you’ve been pondering? Let’s talk about it! Even if it's outside our fields, no wrong answers!

Whether it’s a specific field you want to explore or a “random thought of the day,” feel free to start up the chat; we will reply. We’re here to start discussions, share knowledge, and learn together. Drop your ideas below—there are also no wrong questions!

I’m also thinking about occasionally hosting some hypothesis-generating sessions, starting small research projects, and maybe even setting up a little DIY lab. If there’s interest, of course. Maybe we could get into some 3D printing or simple bio experiments too! Let's see where this can go. Let's get the hive mind goin!

 

Ok, so I had a patient. The actual history isn't terribly important because this sort of thing happens relatively frequently, but to give you a quick one-liner: he was an older male with rheumatoid arthritis admitted for Staph bacteremia. In cases of blood infections, we order tests called "clearance cultures" to track and confirm that the organism we're fighting disappears with treatment. In this case, 1 out of 4 of these samples tested positive for a potential Bacillus species—the genus to which anthrax belongs. That being said, completely inert species of Bacillus are common contaminants in this setting, and the fact that only 1 out of 4 samples tested positive definitely makes you think this is such a case of contamination.

However, we treat it as if it were anthrax until we're completely certain it isn't. It's Schrödinger's anthrax! After all, you don’t want to be the lab that missed anthrax.

Bacillus anthracis Identification Colonies of B. anthracis appear non-hemolytic, consist of gram-variable rods with spore forms, and are non-motile. In other words, when grown on sheep's blood agar, they do not break down hemoglobin (a feature many microorganisms possess), appear elongated and purple or pink under a microscope after staining (gram-variable), produce spores (a survival mechanism), and lack motility (i.e., they don’t move via structures like flagella). We use these properties to rule out B. anthracis. While mass spectrometry is the gold standard for organism identification in modern microbiology, when it comes to potential anthrax, we revert to basic microbiological methods for safety reasons (which we can discuss more in the comments if you're interested).

Bacillus anthracis: What Sets It Apart? Bacillus anthracis is the causative agent of anthrax, a zoonotic disease, meaning it can be transmitted to humans through the handling or consumption of contaminated animal products. Due to its potential use as a bioweapon, B. anthracis is classified as a Tier 1, Category A agent by the CDC. Even though infection is rare in the United States, the micro lab remains vigilant in identifying this organism due to its serious implications.

Plasmids and Virulence Factors What makes B. anthracis particularly dangerous are its virulence plasmids, pXO1 and pXO2, which carry the genes responsible for toxin production and capsule formation, respectively. These plasmids play a crucial role in the organism’s ability to cause disease, enabling it to evade the immune system and produce lethal toxins.

But what exactly is a plasmid?

What is a Plasmid? A plasmid is a small, circular piece of DNA that exists independently of the bacterial chromosome. Unlike the bacterial genome, which contains essential genes for the organism’s survival, plasmids often carry genes that provide advantages under certain conditions—such as antibiotic resistance or, in the case of B. anthracis, virulence factors.

Plasmids are particularly interesting biologically and evolutionarily because they can be transferred between bacteria via a process called horizontal gene transfer. This means bacteria can acquire new traits, such as antibiotic resistance or enhanced pathogenicity, from other bacteria without evolving them slowly over generations. In essence, plasmids allow bacteria to adapt quickly to new challenges, making them highly versatile and resilient organisms. From an evolutionary standpoint, plasmids accelerate genetic diversity and adaptability, giving certain bacteria a survival edge in hostile environments.

Think of it this way: plasmids let bacteria "plug and play" abilities. Imagine if I could transfer my height, immune system, or ability to play the ocarina just by touching you... now you're getting it. Because of these abilities, plasmids are, in many ways, the cornerstone of modern biomedical tech. We will definitely be talking about them again.

What is Bacillus cereus biovar anthracis and why use it to intro plasmids? Now, why bring up plasmids in this way? Because I can. Stories are nice. Anyway, plasmids are key to understanding another entity: Bacillus cereus biovar anthracis. This variant of B. cereus (the contaminant in our story) has acquired plasmids nearly identical to those found in B. anthracis, meaning it can cause anthrax-like diseases, particularly in animals. While B. cereus is more commonly known for causing food poisoning or being a random contaminant, its biovar anthracis variant is a real concern due to its ability to acquire these plasmids, making it capable of causing serious infections similar to anthrax. Mother nature is getting scarier!

In 2016, this variant was added to the CDC’s select agent list, emphasizing the significance of monitoring its presence, especially in cases involving animals. Though not as common in humans, its existence underscores the evolutionary importance of plasmids in spreading virulence factors across bacterial species.

Conclusion To wrap it up: Plasmids are fascinating, highly relevant to the changing landscape of infectious diseases, and, as will be discussed later, they might even change what it means to be human.

4
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

OK, Time for Something Random: Lyme disease and its strange connection to lizards.

Lyme disease is a big deal. Especially, or at least historically, here on the east coast US. Did you know that some animals can actually cure infected tick carriers!? I didn't either.

In parts of California, western fence lizards play a surprising role in controlling Lyme disease. When ticks feed on these lizards, a protein in the lizards’ blood kills the bacterium (Borrelia burgdorferi) that causes Lyme disease: the factor is ingested with the blood meal and kills the bacteria present in the gut of the tick.

To conclude, nature is wild. Much of our innovation in biology/medicine, including CRISPR (see earlier related posts) and the majority of our antibiotics, aren't so much inventions but are more accurately 'discoveries.'

Relevant papers I happened upon: https://pubmed.ncbi.nlm.nih.gov/9488334/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5413869/

 

Let’s continue our dive into gene therapy with one of my favorite papers. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6876218/

In this study, researchers delivered three longevity-associated genes (FGF21, αKlotho, and sTGFβR2) to mice using a gene therapy cocktail. These genes target metabolism, heart function, and kidney health—three areas that typically decline with age. Here’s why this is a big deal:

Obesity & Diabetes? Reversed. Mice fed a high-fat diet lost weight and saw their diabetes symptoms disappear, just by tweaking how their cells handled energy.

Heart Failure? Improved. The therapy improved heart function by 58%, meaning it could help tackle the leading cause of death worldwide.

Kidney Disease? Protected. Mice treated with this gene therapy avoided the typical kidney damage seen with age-related conditions.

What’s most exciting is that a single gene therapy cocktail—combining just two of the three genes—was able to treat all of these diseases simultaneously. Imagine being able to tackle multiple age-related health issues with just one treatment!

This approach could be a game-changer in how we think about aging and disease. Instead of targeting one condition at a time, we might be able to treat aging itself by addressing the root causes of multiple diseases.

What do you think—are we on the verge of a breakthrough in how we fight age-related diseases?

See this similar paper here targeting TERT and follistatin: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9171804/

Do these papers pass our threshold of believability? Are we concerned that one of these papers had a few post publication amendments? I may circle back to poke holes in them (if I can find any) at a later time. Feel free to beat me to it!

3
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Let’s talk CRISPR-Cas9 and why it’s one of the most significant breakthroughs in modern biology.

At its core, CRISPR-Cas9 is a tool for precise genome editing. Before CRISPR, genetic modification was a slow, expensive, and often imprecise process. CRISPR changed the game by allowing scientists to cut DNA at specific sites, guided by an RNA molecule that can be customized to target nearly any gene. Once the DNA is cut, it can be repaired in a way that adds, deletes, or alters the genetic sequence. This kind of precision has opened up endless possibilities.
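A tiny sketch of the targeting logic for SpCas9: the guide matches a 20-nt protospacer that must sit immediately 5' of an NGG PAM. The sequence below is made up, and real guide design also weighs off-target sites, GC content, and cut position.

```python
# Sketch of SpCas9 targeting: find every 20-nt protospacer that sits immediately
# 5' of an NGG PAM on this strand. The sequence is made up; real guide design
# also weighs off-targets, GC content, and where the cut lands in the gene.
import re

def find_spcas9_targets(dna: str, guide_len: int = 20):
    """Return (protospacer, PAM, approx_cut_index) for each NGG PAM with room for a guide."""
    dna = dna.upper()
    targets = []
    for m in re.finditer(r"(?=([ACGT]GG))", dna):
        pam_start = m.start(1)
        if pam_start >= guide_len:
            protospacer = dna[pam_start - guide_len:pam_start]
            cut_site = pam_start - 3  # Cas9 cuts roughly 3 bp upstream of the PAM
            targets.append((protospacer, dna[pam_start:pam_start + 3], cut_site))
    return targets

toy_locus = "ATGCGTACCGTTAGCATGCAATTCCGGATCCGTACGTTAGCCTGG"
for protospacer, pam, cut in find_spcas9_targets(toy_locus):
    print(protospacer, pam, f"cut near index {cut}")
```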

Why is this such a big deal?

Speed and Efficiency: CRISPR allows scientists to make changes to the DNA of organisms in weeks, not years. You want to knock out a gene? You can do that. Want to introduce a new one? Done. The speed and flexibility are revolutionary compared to older methods.

Precision: CRISPR can zero in on specific genes with high accuracy, reducing the risk of off-target effects (though this is still an area of research). Precision matters when you’re editing the building blocks of life.

Wide Applications: It’s not just a tool for basic research—CRISPR is shaping medicine, agriculture, and even biotechnology. Scientists are working on curing genetic disorders, creating disease-resistant crops, and engineering cells to fight cancer. The potential here is massive.

How is CRISPR shaping biology today?

Gene Therapy: One of the most exciting applications is in treating genetic diseases like sickle cell anemia, muscular dystrophy, and certain forms of blindness. By directly editing the faulty genes responsible for these conditions, CRISPR could offer permanent cures rather than just treating symptoms.

Cancer Research: CRISPR is being used to edit immune cells, making them better at recognizing and attacking cancer. We’re moving closer to personalized medicine where your immune system can be genetically fine-tuned to fight off specific diseases.

Agriculture: In crops and livestock, CRISPR is being used to enhance yields, create resistance to pests and disease, and improve nutritional content. This could help address food security as populations grow and climates change.

Basic Research: Perhaps one of its most profound impacts is that CRISPR makes it easier to explore how genes work. We’re learning more about gene functions at a faster pace than ever before, and this knowledge feeds into all other areas of biology.

Of course, with great power comes great responsibility. There are ethical considerations around using CRISPR, especially when it comes to editing human embryos or making changes that can be passed down to future generations. The technology is advancing quickly, but society will need to decide how to handle the moral implications.

In summary, CRISPR-Cas9 is a huge deal because it makes genome editing faster, cheaper, and more accurate than ever before. It’s shaping everything from how we fight diseases to how we grow food, and it’s rapidly transforming the future of biology. We’re just starting to scratch the surface of its potential.

Suggested reading:

The paper that started it all https://pubmed.ncbi.nlm.nih.gov/22745249/

A look at a future where everyone has access to the power of CRISPR https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11297044/
