Longevity gene therapy and AI – What is on the horizon?
Gene therapy holds promise for extending human lifespan and enhancing healthspan by targeting genes associated with aging processes. Longevity gene therapy, particularly interventions focusing on genes like TERT (telomerase reverse transcriptase), Klotho, and Myostatin, is at the forefront of experimental research. Companies such as BioViva, Libella, and Minicircle are pioneering these interventions, albeit with varying degrees of transparency and scientific rigor.
TERT, Klotho, and Myostatin in Longevity
TERT: The TERT gene encodes the catalytic subunit of telomerase, the enzyme that maintains telomeres, the protective caps at chromosome ends whose shortening is linked to cellular aging. Overexpression of TERT in model organisms has lengthened telomeres and may delay aspects of aging (see the toy simulation after this list).
Klotho: This gene plays a crucial role in regulating aging and lifespan: in mice, Klotho deficiency produces a premature-aging phenotype, while overexpression extends lifespan. The Klotho protein has been associated with protective effects against a range of age-related diseases.
Myostatin: Myostatin normally limits muscle growth, so inhibiting it increases muscle mass and strength, which could counteract some of the age-related loss of muscle (sarcopenia).
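As a rough illustration of the telomere argument above, the following toy simulation is a minimal sketch: the starting length, critical threshold, loss per division, and telomerase gain are illustrative assumptions, not measured values. It only shows how telomerase activity changes the number of divisions a cell lineage can undergo before reaching a critically short telomere length.

# Toy model of replicative senescence: telomeres shorten with each division,
# while telomerase (TERT) adds sequence back. All parameters are illustrative.

def divisions_before_senescence(start_bp=10_000, critical_bp=4_000,
                                loss_per_division=70, tert_gain_per_division=0,
                                max_divisions=1_000):
    """Count divisions until telomere length drops below a critical threshold."""
    length, divisions = start_bp, 0
    while length > critical_bp and divisions < max_divisions:
        length += tert_gain_per_division - loss_per_division
        divisions += 1
    return divisions

if __name__ == "__main__":
    print("No telomerase:      ", divisions_before_senescence())
    print("Partial telomerase: ", divisions_before_senescence(tert_gain_per_division=40))
    # With full compensation the count simply hits the max_divisions cap,
    # i.e. there is no replicative limit in this toy model.
    print("Full compensation:  ", divisions_before_senescence(tert_gain_per_division=70))

In this sketch, partial telomerase activity roughly doubles the number of divisions before senescence, while full compensation removes the limit entirely; real biology is far messier, but this is the basic logic behind targeting TERT.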
The Experimental Nature of Longevity Gene Therapy
The application of gene therapy for longevity remains largely experimental. Most available data come from preclinical studies, primarily in animal models. Human data are scarce, raising questions about efficacy, safety, and potential long-term effects. The ethical implications of these experimental treatments, especially in the absence of robust data, are significant, touching on issues of access, consent, and potential unforeseen consequences.
Companies Offering Longevity Gene Therapy
BioViva: Notably involved in this field, BioViva has been vocal about its endeavors in gene therapy for aging. While it has published some data from mouse studies, human data remain limited.
Libella and Minicircle: These companies also offer longevity gene therapies but face similar challenges in providing comprehensive human data to back their claims.
Industry Perspective vs. Public Discourse
The discourse around longevity gene therapy is predominantly shaped by those within the industry, such as Liz Parrish of BioViva and Bryan Johnson. While their insights are valuable, they may also be biased toward promoting their own interventions. The lack of widespread discussion on platforms like Reddit and Twitter, especially from independent sources outside the industry, points to a need for greater transparency and peer-reviewed research.
Ethical and Regulatory Considerations
The ethical and regulatory landscape for gene therapy is complex, particularly for treatments aimed at non-disease conditions like aging. The experimental status of longevity gene therapies raises significant ethical questions, particularly around informed consent and the potential long-term impacts. Regulatory bodies are tasked with balancing the potential benefits of such innovative treatments against the risks and ethical concerns, requiring a robust framework for clinical trials and approval processes.
Longevity Gene Therapy and AI
Integrating Artificial Intelligence (AI) into longevity gene therapy sits at the intersection of biotechnology and computational science. Machine learning is increasingly used to decipher complex biological data, predict the impact of genetic modifications, and optimize therapy designs. In longevity gene therapy specifically, AI can mine genomic, proteomic, and metabolomic datasets to identify new therapeutic targets, clarify mechanisms of aging, and predict individual responses to gene therapies. It also lets researchers simulate the effects of gene editing or modulation before clinical application, improving the precision and safety of candidate therapies.

AI-driven platforms further enable personalized tailoring of gene therapy interventions to an individual's genetic makeup, which matters for effective and minimally invasive treatment. This synergy accelerates discovery and development, promising faster translation of research findings into clinical applications that could extend human healthspan and lifespan.
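To make the kind of analysis described above concrete, here is a minimal sketch, on purely synthetic data, of training a model to distinguish likely responders from non-responders to a hypothetical gene therapy using omics-style features. The dataset, feature count, and model choice are illustrative assumptions, not a pipeline from any company or study mentioned here; it only shows the general shape of such a prediction task.

# Minimal sketch: predict response to a hypothetical longevity gene therapy
# from omics-style features. The data are synthetic; this is illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_features = 500, 200           # e.g. 200 expression/methylation features
X = rng.normal(size=(n_samples, n_features))
# Assume (purely for illustration) that a handful of features drive response.
signal = X[:, :5].sum(axis=1)
y = (signal + rng.normal(scale=1.0, size=n_samples) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
# Feature importances hint at which (synthetic) markers the model relies on.
top = np.argsort(model.feature_importances_)[::-1][:5]
print("Top features:", top)

In a real setting, the features would be measured omics data, the labels would come from trial outcomes, and far more care would go into validation, confounders, and interpretability; the sketch only conveys why large, well-labeled human datasets are a prerequisite for AI-guided therapy design.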
For longevity gene therapy to advance from experimental to accepted medical practice, several key developments are needed:
Robust Human Clinical Trials: Rigorous, peer-reviewed clinical trials involving human participants are essential to establish the safety and efficacy of gene therapies for longevity.
Transparency and Peer Review: Open sharing of data and peer-reviewed publication of results can help build credibility and foster a more informed public discourse.
Ethical and Regulatory Frameworks: Developing clear ethical guidelines and regulatory pathways for these therapies will be crucial in ensuring they are deployed responsibly.
The future of longevity gene therapy is fraught with challenges but also holds immense promise. As the field evolves, a multidisciplinary approach involving scientists, ethicists, regulators, and the public will be crucial in realizing its potential in a responsible and beneficial manner.
Longevity gene therapy and AI: Annex
What are the top 10 most promising potential longevity therapies being researched?
I think the idea of treating aging as a disease that is treatable and, in some ways, preventable is a really necessary focus. The OP works with some of the world’s top researchers using HBOT as part of that process to increase oxygen in the blood and open new pathways in the brain, in order to address cognitive decline and increase healthspan (vs. just lifespan). Pretty cool stuff!
In longevity research, HBOT stands for “hyperbaric oxygen therapy,” which has been studied for its potential effects on healthy aging. Several studies have reported that HBOT can target hallmarks of aging at the cellular level, including telomere shortening and senescent cell accumulation. For example, a prospective trial found that HBOT can significantly modulate the pathophysiology of skin aging in a healthy aging population, with effects such as angiogenesis and senescent cell clearance. Other research suggests HBOT may have senolytic effects, including increased telomere length and decreased senescent cell accumulation in aging adults. The potential of HBOT in healthy aging and its implications for longevity are still being explored, and further research is needed to understand its effects and potential applications.
2- Are they also looking into HBOT as a treatment for erectile dysfunction?
Definitely! Dr. Shai Efrati has been researching that and published a study in the International Journal of Impotence Research. Dr. Efrati and his team found that 80% of men “reported improved erections” after HBOT: https://www.nature.com/articles/s41443-018-0023-9
Cellular rejuvenation, a.k.a. partial reprogramming (as someone else already said), and not just via the Yamanaka (OSKM) factors or cocktail variants, but also via novel alternatives to the Yamanaka factors.
I see a lot of people saying reprogramming, and I think the idea is promising, but as someone who worked on reprogramming cells in vitro, I can tell you that any proof of concept in vivo in large-animal models is a long way off.
7- I think plasmapheresis is the technology most likely to be proven beneficial in the near term, and also one that can be scaled and offered at reasonable prices.
8- Bioelectricity: if we succeed in interpreting the code of electrical signals by which cells communicate, we could control tissue growth and development, including organ regeneration.
9- Gene therapy and reprogramming will blow the lid off the maximum lifespan: turning longevity genes on, expressing proteins that repair cellular damage, and reversing the epigenetic changes that occur with aging.
10- I don’t think anything currently being researched (that we know of) has the potential to take us to immortality. That will likely require some pretty sophisticated nanotechnology. However, the important part isn’t getting to immortality, but getting to LEV (longevity escape velocity). In that respect, senolytics and stem cell treatments both look pretty promising, and they can likely achieve more in combination than on their own.
11- Spiroligomers to remove glucosepane cross-links from the ECM (extracellular matrix).
12- Yuvan Research. Look up the recent paper they have with Steve Horvath on porcine plasma fractions.
13- This OP thinks most of the therapies being researched will end up having insignificant effects. The only thing that looks promising to me is new tissue grown from injected stem cells or outright organ replacement. Nothing else will address DNA damage, which results in gene loss, dysregulation of gene expression, and loss of suppression of transposable elements.
Altos Labs is a biotechnology research company focused on the deep biology of cellular rejuvenation, with the aim of reversing disease and developing therapies that can halt or reverse aspects of human aging. The company’s stated goal is to increase human “healthspan,” with longevity extension as an “accidental consequence” of the work. Altos Labs seeks to restore cell health and resilience in order to reverse disease, injury, and disabilities that can occur throughout life, and it is working on specialized cell therapies based on induced pluripotent stem cells. The company is notable for its focus on basic research without immediate prospects of a commercially viable product, and it has attracted significant investment, including a reported $3 billion funding round in January 2022. Its research centers on understanding and harnessing the ability of cells to resist the stressors that give rise to disease, particularly in the context of aging.
16– Not so much a “therapy,” but I think research into growing human organs may be very promising long term. Being able to get organ transplants made from your own cells would mean zero rejection issues and no supply limitations for transplants. Nearer term, drugs like rapamycin show good potential for slowing the aging process and are in human trials.
What is biological reprogramming technology?
Biological reprogramming technology involves the process of converting specialized cells into a pluripotent state, which can then be directed to become a different cell type. This technology has significant implications for regenerative medicine, disease modeling, and drug discovery. It is based on the concept that a cell’s identity is defined by the gene regulatory networks that are active in the cell, and these networks can be controlled by transcription factors. Reprogramming can be achieved through various methods, including the introduction of exogenous factors such as transcription factors. The process of reprogramming involves the erasure and remodeling of epigenetic marks, such as DNA methylation, to reset the cell’s epigenetic memory, allowing it to be directed to different cell fates. This technology has the potential to create new cells for regenerative medicine and to provide insights into the fundamental basis of cell identity and disease.
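One way the “resetting of epigenetic memory” described above is quantified in practice is with epigenetic (DNA methylation) clocks, which estimate a biological age from a weighted combination of methylation levels at selected CpG sites; partial reprogramming is typically reported to lower this estimated age. The sketch below is illustrative only: the CpG identifiers, weights, intercept, and beta values are made up, whereas real clocks (such as Horvath’s) use hundreds of calibrated coefficients.

# Illustrative epigenetic-clock calculation: biological age as a weighted sum of
# CpG methylation beta values (0..1). All CpG IDs, weights, and the intercept are
# hypothetical; real clocks use hundreds of coefficients fit to training data.

CLOCK_WEIGHTS = {          # hypothetical CpG site -> weight (years per unit beta)
    "cg0000001": 12.4,
    "cg0000002": -8.1,
    "cg0000003": 20.7,
}
INTERCEPT = 30.0           # hypothetical baseline age in years

def epigenetic_age(beta_values: dict[str, float]) -> float:
    """Estimate 'biological age' from methylation beta values at clock CpGs."""
    return INTERCEPT + sum(w * beta_values.get(cpg, 0.0) for cpg, w in CLOCK_WEIGHTS.items())

# A cell before and after partial reprogramming: reprogramming tends to lower the
# clock estimate, which is how 'epigenetic rejuvenation' is reported in the literature.
somatic = {"cg0000001": 0.80, "cg0000002": 0.20, "cg0000003": 0.75}
reprogrammed = {"cg0000001": 0.45, "cg0000002": 0.60, "cg0000003": 0.30}
print("Somatic cell estimated age:      ", round(epigenetic_age(somatic), 1))
print("Reprogrammed cell estimated age: ", round(epigenetic_age(reprogrammed), 1))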
See also
Gene Therapy Basics for foundational understanding of gene therapy techniques and applications.