#FoundationModels

topscientists


Exploring promptable foundation models for SAR imagery, this study adapts advanced segmentation intelligence to detect snow avalanches with precision. ❄️📡 By leveraging transferable vision representations, it enhances disaster mapping, rapid response, and terrain analytics, unlocking scalable, data-efficient geospatial insights for resilient mountain ecosystems worldwide, supporting climate risk monitoring and predictive hazard mitigation.

World Top Scientists Awards

Visit Our Website 🌐: worldtopscientists.com

Nominate Now📝: https://worldtopscientists.com/award-nomination/?ecategory=Awards&rcategory=Awardee

Contact us ✉️: support@worldtopscientists.com

segmed

From Chaos to Clarity: Segmed and Microsoft on How Healthcare AI Is Evolving

✅ From Traditional AI to Foundation Models
Traditional AI: narrow, task-specific, heavy on supervised training
Foundation Models: broad, multi-modal, adaptive with minimal fine-tuning

✅ Challenges in Accessing Medical Imaging Data
Data silos and lack of standardization
Complex de-identification processes
Large file sizes and no easy cohort-building tools
Bias from limited, non-diverse datasets

✅ What’s Needed for Foundation Models
Multi-modal datasets: images, reports, clinical data
Large-scale non-specific data for pre-training + smaller specific sets for tuning

✅ Segmed’s Solution
Secure de-identification
Searchability and cohort building
Standardization across providers
Expanded diversity: USA 50 states + 10 countries

✅ The Future
Lower barriers to entry for AI development
Broad, multi-modal models integrating radiology, pathology, and clinical history
Democratizing healthcare innovation
Foundation models aren’t just the future—they’re the catalyst for scalable, equitable healthcare AI.

A big thank you to Ivan Tarapov, Sr. Director of Product Management for Multimodal Healthcare AI at Microsoft, for sharing his insights and vision during this session!

For more information visit: https://www.segmed.ai/resources/events

futureofgreen

GPU and Energy Costs Could Force a Slowdown in AI Deployments Soon

Market Corrections and Ecosystem Renewal: How the AI Cycle Mirrors Natural Forces

When forests burn, life doesn’t end—it transforms. The same principle governing natural ecosystems now drives technology markets. The AI infrastructure boom, inflated by speculative investment and irrational exuberance, faces an inevitable reckoning that will reshape the entire industry. This correction isn’t a…

segmed

🚀 Segmed at RSNA 2025: Unlocking the Power of Real-World Imaging Data 🚀

We’re excited to share that Segmed is heading to RSNA 2025, where we’ll spotlight how Real-World Imaging Data (RWiD) is transforming every stage of the AI lifecycle — from Foundation Model development all the way to regulatory submission readiness.

This year, we’re proud to be featured in an AI Theater Session with Microsoft, diving into how high-quality, tokenized, de-identified imaging data accelerates model performance, safety, and clinical reliability.

🔗 Event details: https://www.segmed.ai/events/segmed-rsna-2025

🌐 Why RWiD Matters More Than Ever
Healthcare Foundation Models (HFMs) are reshaping medical AI — but real-world deployment requires more than generalized training. Clinical imaging varies dramatically by modality, equipment, demographic factors, and context. Without robust fine-tuning, even strong HFMs risk:

⚠️ Missing subtle diagnoses
⚠️ Failing to generalize across populations
⚠️ Stalling on the path to clinical validation and regulatory clearance

At Segmed, we bridge that gap with the industry’s most diverse, global imaging dataset:
📊 100M imaging studies
🏥 2,000+ healthcare sites
🌎 Across 5 continents
🔐 Fully tokenized, de-identified, regulatory-ready

Fine-tuning with Segmed’s RWiD enables:
✅ Higher diagnostic accuracy
✅ Diverse patient cohorts
✅ Stronger safety and performance evidence for regulatory submissions
✅ Real-world reliability that gets AI from the lab into the industry

Dive deeper into how RWiD elevates HFM development in our latest blog post:
👉 https://www.segmed.ai/resources/blog/fine-tuning-healthcare-foundation-models-with-rwid

If you’re building the next generation of medical AI, let’s connect at RSNA.

Segmed is here to streamline your journey from Foundation Models to regulatory-grade evidence — powered by Real-World Imaging Data.

govindhtech

Amazon Bedrock now offers AI21 Studio’s Jamba-Instruct

AI21 Labs in Amazon Bedrock

Create dependable generative AI applications by utilising AI21 Labs foundation models.

AI21 Labs AWS Advantages

Designed with the enterprise in mind

Use purpose-built models to power text generation, long-document summarisation, and question answering for essential business activities.

Selection of model sizes

Choose among the Jurassic-2 Mid, Jurassic-2 Ultra, and Jamba-Instruct models according to requirements for context length, complexity, and other factors.
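As a toy illustration of that selection criterion, the helper below picks the model with the smallest context window that still fits a request, using the token limits quoted later in this post. The Bedrock model identifiers are assumptions to check against the current Bedrock model catalogue.

```python
# Toy helper: choose an AI21 model on Bedrock by required context length.
# Model IDs are assumed; token limits are the ones quoted in this post.
MODEL_CONTEXT = {
    "ai21.j2-mid-v1": 8_192,              # Jurassic-2 Mid
    "ai21.j2-ultra-v1": 8_192,            # Jurassic-2 Ultra
    "ai21.jamba-instruct-v1:0": 256_000,  # Jamba-Instruct
}

def pick_model(required_tokens: int) -> str:
    """Return the model with the smallest context window that still fits."""
    candidates = [(limit, model) for model, limit in MODEL_CONTEXT.items()
                  if limit >= required_tokens]
    if not candidates:
        raise ValueError(f"No model supports {required_tokens} tokens")
    return min(candidates)[1]

print(pick_model(4_000))    # a short prompt fits the 8K Jurassic-2 models
print(pick_model(100_000))  # a long document needs Jamba-Instruct's 256K window
```

In practice the choice also weighs quality and price, as the post notes; context length is just the hard constraint.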

Dedicated support

With the professional advice of AI21 account executives and solution architects, move from prototype to production.

Become acquainted with AI21 Labs

AI21 Labs focuses on creating cutting-edge FMs and AI systems that let businesses use generative AI in their operations. With these models, AI21 Labs aims to enable organisations to develop AI solutions that promote confident decision-making, unbridled creativity, and clear communication, all of which are necessary for businesses to prosper in the artificial intelligence era.

Use cases

Banking operations

Produce pre-formatted term sheets ready for sharing with stakeholders, and summarise the most important information from lengthy, complex documents such as market analyses and corporate reports.

Retail

Produce product descriptions and marketing content at scale, optimised for conversions, while adhering to your brand’s tone, length, and style.

Client assistance

Give clients prompt, intelligible answers to their questions based on information sheets, policies, and documents.

Information handling

Increase productivity by enabling teams to quickly and easily extract well-reasoned answers from intricate documentation or policies using natural language.

Versions of the models

Jamba-Instruct

AI21’s Jamba model, a hybrid SSM-Transformer, has been fine-tuned for optimal performance and quality, making Jamba-Instruct a dependable commercial option.

Maximum tokens: 256K

Languages supported: English

Supported use cases: instruction following, text generation, document summarisation, and question answering.

Fine-tuning: not supported.
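As a minimal sketch of calling Jamba-Instruct through the Bedrock Runtime API with boto3: the model ID and the chat-style request/response shapes below are assumptions to verify against the current Bedrock documentation for AI21 models, and the call itself requires valid AWS credentials.

```python
import json

# Sketch only: model ID and request/response schemas are assumptions to
# verify against the Amazon Bedrock documentation for AI21 models.
MODEL_ID = "ai21.jamba-instruct-v1:0"

def build_request(prompt: str, max_tokens: int = 512) -> str:
    """Serialise a chat-style Jamba-Instruct request body."""
    return json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.2,
    })

def summarise(document: str) -> str:
    """Send a summarisation prompt to Bedrock (needs AWS credentials)."""
    import boto3  # AWS SDK for Python
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=MODEL_ID,
        body=build_request("Summarise:\n" + document),
    )
    payload = json.loads(response["body"].read())
    return payload["choices"][0]["message"]["content"]
```

The same request could also go through Bedrock's unified `converse` API; the body above follows AI21's chat format.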

Jurassic-2 Ultra

AI21’s most powerful model, for complex text generation tasks demanding the highest quality output.

Maximum tokens: 8,192

Languages supported: Dutch, English, Spanish, French, German, Portuguese, and Italian

Supported use cases: question answering, summarisation, draft generation, information extraction from complex material, and ideation for tasks requiring deductive reasoning.

Fine-tuning: not supported.

Jurassic-2 Mid

AI21’s mid-sized model, for sophisticated text generation tasks that balance cost and quality.

Maximum tokens: 8,192

Languages supported: English

Supported use cases: question answering, summarisation, draft generation, information extraction from complex material, and ideation for tasks requiring deductive reasoning.

Fine-tuning: not supported.

Introducing Task-Specific and Jurassic-2 APIs

AI21 Labs Jurassic

Announcing the release of Jurassic-2, the latest generation of AI21 Studio’s foundation models, which brings superior quality and new capabilities. AI21 Studio is also making its task-specific APIs available, offering plug-and-play reading and writing capabilities that outperform rival offerings.

The second generation of AI21 Studio foundation models, known as Jurassic-2 (or J2), has several new features and notable quality enhancements, such as zero-shot instruction-following, lower latency, and multi-language compatibility.

Developers may access industry-leading APIs for specialised reading and writing operations right out of the box with task-specific APIs.

For a closer look at each, continue reading.

Jurassic-2

AI21 Studio is pleased to introduce their brand-new, cutting-edge Large Language Model family. Not only is J2 a complete upgrade over their previous generation models, Jurassic-1, but it also comes with additional features and capabilities that set it apart from the competition.

The Jurassic-2 family consists of three base language models of different sizes (Large, Grande, and Jumbo), plus instruction-tuned versions of the Grande and Jumbo models. (Image credit: AWS)

On Stanford’s Holistic Evaluation of Language Models (HELM), the industry standard for language models, Jurassic is already causing a stir. AI21 Labs evaluated J2 Jumbo using HELM’s official repository, and it currently ranks second (and is still rising). Not to mention, their Grande mid-sized model outperforms versions up to 30 times larger in size, allowing users to maximise production costs and speed without compromising quality.

What’s new compared to Jurassic-1?

Higher calibre

Utilising state-of-the-art pre-training techniques and the most recent data (as of mid-2022), J2’s Jumbo model has achieved an 86.8% win-rate on HELM according to their internal assessments, firmly establishing it as a premier choice in the LLM arena.

Instruction capabilities

The zero-shot instruction capabilities of J2’s best-in-class models enable them to be guided with natural language, without the need for examples. These capabilities have been added to J2’s Grande and Jumbo models.

Multilingual Assistance

J2 is compatible with a number of non-English languages, such as Dutch, Spanish, French, German, Portuguese, and Italian.

Performance

J2’s models improve latency by up to 30% compared with AI21 Studio’s earlier models.

You can now access every Jurassic-2 model in the AI21 Studio playground and API. Here are some tips and strategies for using the new Instruct models to get you going.

Task-Specific APIs

With the release of the Wordtune API set, AI21 Labs is also pleased to announce AI21 Studio’s new line of Task-Specific APIs, which give developers access to the language models behind AI21’s wildly successful consumer-facing reading and writing apps.

What makes task-specific APIs necessary?

AI21 Studio’s general Large Language Models are extremely powerful, and many clients have successfully customised them to power their applications. But AI21 Studio has also seen that many users share recurring use cases.

AI21 Studio’s ready-made, best-in-class language processing solutions enable developers to bypass many of the necessary model training and fine-tuning phases by giving them access to task-specific APIs.

Cutting-edge AI is used by Wordtune and Wordtune Read to help users with writing and reading tasks, all while saving time and enhancing efficiency. With the introduction of the Wordtune API, AI21 Studio is making the AI engine underlying this range of award-winning apps available to developers, enabling them to fully utilise Wordtune’s features and incorporate them into their own apps:

  • Reword texts to suit any length, tone, or meaning by using paraphrasing.
  • Condense long texts into digestible, bite-sized chunks by summarising them.
  • Grammar and typo correction in real time is possible with Grammatical Error Correction (GEC).
  • Text Improvements: Learn how to make your writing more clear, more fluid, and more vocabulary-rich.
  • Text Segmentation: Divide lengthy texts into paragraphs that are each focused on a different subject.

Read more on Govindhtech.com

govindhtech

Amazon Bedrock Studio: Accelerate generative AI development

AWS is pleased to present to the public today Amazon Bedrock Studio, a brand-new web-based generative artificial intelligence (generative AI) development environment. By offering a fast prototyping environment with essential Amazon Bedrock technologies like Knowledge Bases, Agents, and Guardrails, Amazon Bedrock Studio speeds up the creation of generative AI applications.

Summary

A brand-new SSO-enabled web interface called Amazon Bedrock Studio offers developers from across an organisation the simplest way to collaborate on projects, experiment with large language models (LLMs) and other foundation models (FMs), and refine generative AI applications. It simplifies access to various FMs and developer tools in Bedrock and provides a fast prototyping environment. To enable Bedrock Studio, AWS administrators can set up one or more workspaces for their company in the AWS Management Console for Bedrock and allow individuals or groups to use each workspace.

In only a few minutes, begin developing applications

Using their company credentials (SSO), developers at your firm can easily log in to the Amazon Bedrock Studio online experience and begin experimenting with Bedrock FMs and application development tools right away. Bedrock Studio provides developers with a safe haven away from the AWS Management Console in which to utilise Bedrock features like Knowledge Bases, Amazon Guardrails, and Agents.

Create flexible generative AI applications

With Amazon Bedrock Studio, developers can gradually improve the accuracy and relevance of their generative AI applications. To acquire more accurate responses from their app, developers can begin by choosing an FM that is appropriate for their use case and then iteratively enhance the prompts. Then, they can add APIs to obtain the most recent results and use their own data to ground the app to receive more pertinent responses. Bedrock Studio streamlines and reduces the complexity of app development by automatically deploying pertinent AWS services (such as Knowledge Bases and Agents). Additionally, enterprise use cases benefit from a secure environment because data and apps are never removed from the assigned AWS account.

Work together on projects with ease

Teams may brainstorm, test, and improve their generative AI applications together in Amazon Bedrock Studio‘s collaborative development environment. In addition to creating projects and inviting peers, developers may also share apps and insights and receive immediate feedback on their prototypes. Access control is a feature of Bedrock Studio projects that guarantees that only members with permission can use the apps and resources inside of a project.

Encourage creativity without worrying about infrastructure management

Knowledge bases, agents, and guardrails are examples of managed resources that are automatically deployed in an AWS account when developers build applications in Amazon Bedrock Studio. Because these Bedrock resources are always available and scalable as needed, developers don’t need to worry about the underlying compute and storage infrastructure. Furthermore, the Bedrock API makes it simple to access these resources, which means the generative AI apps created in Bedrock Studio can easily be integrated with your workflows and processes.

Take precautions to ensure the finest answers

To make sure their application doesn’t produce undesirable output, developers can install content filters and create guardrails for both user input and model replies. To get the desired results from their apps, they can add denied topics and configure filtering levels across different categories to customise Guardrails behaviour.
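As a hedged sketch, denied topics and category filters of this kind map roughly onto Amazon Bedrock's `create_guardrail` API as shown below. The field names and filter categories should be checked against the current API reference, and the topic, messages, and strength values are hypothetical examples, not a recommended policy.

```python
# Illustrative guardrail configuration; the denied topic, messages, and
# strength values are hypothetical examples.
guardrail_config = {
    "name": "studio-demo-guardrail",
    "blockedInputMessaging": "Sorry, I can't help with that topic.",
    "blockedOutputsMessaging": "Sorry, I can't share that response.",
    # Denied topics: inputs or outputs matching these definitions are blocked.
    "topicPolicyConfig": {
        "topicsConfig": [{
            "name": "FinancialAdvice",
            "definition": "Requests for personalised investment advice.",
            "type": "DENY",
        }]
    },
    # Content filters with separate strength levels for input and output.
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        ]
    },
}

def create_guardrail(client):
    """client should be boto3.client('bedrock'); returns the API response."""
    return client.create_guardrail(**guardrail_config)
```

Bedrock Studio builds an equivalent configuration through its UI; the dictionary above just makes the shape of the policy explicit.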

As a developer, you can now log into Bedrock Studio and begin experimenting with your company’s single sign-on credentials. Within Bedrock Studio, you may create apps with a variety of high-performing models, assess them, and distribute your generative AI creations. You can enhance a model’s replies by following the stages that the user interface walks you through. You can play around with the model’s settings, set limits, and safely integrate tools, APIs, and data sources used by your business. Working in teams, you can brainstorm, test, and improve your generative AI apps without needing access to the AWS Management Console or sophisticated machine learning (ML) knowledge.

You can be sure that developers will only be able to utilise the functionality offered by Bedrock Studio and won’t have wider access to AWS infrastructure and services as an Amazon Web Services (AWS) administrator.

Let me now walk you through the process of installing Amazon Bedrock Studio.

Use Amazon Bedrock Studio to get started

You must first create an Amazon Bedrock Studio workspace as an AWS administrator, after which you must choose and add the users you wish to grant access to the workspace. You can provide the relevant individuals with the workspace URL once it has been built. Users with the necessary permissions can start developing generative AI apps, create projects inside their workspace, and log in using single sign-on.

Establish a workspace in Amazon Bedrock Studio

Select Bedrock Studio from the bottom left pane of the Amazon Bedrock dashboard.

You must use the AWS IAM Identity Centre to set up and secure the single sign-on integration with your identity provider (IdP) before you can create a workspace. See the AWS IAM Identity Centre User Guide for comprehensive instructions on configuring other IdPs, such as Okta, Microsoft Entra ID, and AWS Directory Service for Microsoft Active Directory. For this demo, user access is configured using the IAM Identity Centre default directory.

Next, select Create workspace, fill in the specifics of your workspace, and create any AWS Identity and Access Management (IAM) roles that are needed.

Additionally, you have the option to choose the workspace’s embedding models and default generative AI models. Select Create once you’re finished.

Choose the newly formed workspace next.

Next, pick the users you wish to grant access to this workspace by choosing User management and then Add users or groups.

You can now copy the Bedrock Studio URL and share it with your users from the Overview tab.

Create apps for generative AI using Amazon Bedrock Studio

Now that the Bedrock Studio URL has been provided, builders can access it and log in using their single sign-on login credentials. Here at Amazon Bedrock Studio, welcome! Allow me to demonstrate how to select among top-tier FMs, import your own data, use functions to call APIs, and use guardrails to secure your apps.

Select from a number of FMs that lead the industry

By selecting Explore, you can begin choosing from among the available FMs and probe the models with natural language prompts.

If you select Build, you may begin developing generative AI applications in playground mode, play around with model settings, refine your application’s behaviour through iterative system prompts, and create new feature prototypes.

Bring your personal data

Using Bedrock Studio, you can choose from a knowledge base built in Amazon Bedrock or securely bring your own data to customise your application by supplying a single file.

Make API calls using functions to increase the relevancy of model responses

When replying to a prompt, the FM can dynamically access and incorporate external data or capabilities by using a function. The model uses an OpenAPI schema you supply to decide which function it needs to call.

A model can include data into its response through functions that it is not directly aware of or has access to beforehand. For instance, even though the model doesn’t save the current weather information, a function may enable the model to acquire it and incorporate it into its response.
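For illustration, a minimal OpenAPI fragment for such a weather function might look like the following. The path, operation, and parameter names are hypothetical, not taken from any documented Bedrock sample.

```python
import json

# Minimal, hypothetical OpenAPI schema the model could use to decide when
# to call an external weather lookup and which parameters to pass.
weather_schema = {
    "openapi": "3.0.0",
    "info": {"title": "Weather functions", "version": "1.0.0"},
    "paths": {
        "/get_current_weather": {
            "get": {
                "operationId": "getCurrentWeather",
                "description": "Return the current weather for a city.",
                "parameters": [{
                    "name": "city",
                    "in": "query",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "Current weather observation"}},
            }
        }
    },
}

print(json.dumps(weather_schema, indent=2))
```

Given a prompt like "What's the weather in Oslo?", the model can match the request to `getCurrentWeather` via its description and supply `city` from the prompt.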

Using Guardrails for Amazon Bedrock, secure your apps

By putting in place safeguards tailored to your use cases and responsible AI rules, you may build boundaries to encourage safe interactions between users and your generative AI apps.

The relevant managed resources, including knowledge bases, agents, and guardrails, are automatically deployed in your AWS account when you construct apps in Amazon Bedrock Studio. To access those resources in downstream applications, use the Amazon Bedrock API.

Amazon Bedrock Studio availability

The public preview of Amazon Bedrock Studio is now accessible in the AWS Regions US West (Oregon) and US East (Northern Virginia).

Read more on govindhtech.com

govindhtech

Innovations in Generative AI and Foundation Models

Generative AI in Therapeutic Antibody Development: With the collaboration announced today, Boehringer Ingelheim and IBM will be able to employ IBM’s foundation model technology to find new candidate antibodies for the development of effective treatments.

Boehringer Ingelheim’s Andrew Nixon, Global Head of Biotherapeutics Discovery, said, “We are very excited to collaborate with the research team at IBM, who share our vision of making in silico biologic drug discovery a reality.” “We will create an unparalleled platform for expedited antibody discovery by collaborating with IBM scientists, and I am sure that this will allow Boehringer to create and provide novel treatments for patients with significant unmet needs.”

Boehringer plans to use a pre-trained AI model created by IBM, which will be further refined using additional proprietary data owned by Boehringer. Vice President of Accelerated Discovery at IBM Research Alessandro Curioni stated, “IBM has been at the forefront of creating generative AI models that extend AI’s impact beyond the domain of language.” “We are excited to now enable Boehringer, a pioneer in the creation and production of antibody-based treatments, to leverage IBM’s multimodal foundation model technologies to help quicken Boehringer’s ability to develop new therapeutics.”

Foundational models for the finding of antibodies

Therapeutic antibodies play a key role in the management of numerous illnesses, such as infectious, autoimmune, and cancerous conditions. The identification and creation of therapeutic antibodies encompassing a variety of epitopes continues to be an extremely difficult and time-consuming procedure, even with significant technological advancements.

Researchers from IBM and Boehringer will work together to use in-silico techniques to speed up the antibody discovery process.  New human antibody sequences will be generated in silico using the sequence, structure, and molecular profile data of disease-relevant targets as well as success criteria for therapeutically relevant antibody molecules, such as developability, specificity, and affinity. The efficacy and speed of antibody discovery, as well as the quality of anticipated antibody candidates, are intended to be enhanced by these techniques, which are based on new IBM foundation model technology.

The defined targets are designed with antibody candidates using IBM’s foundation model technologies, which have proven effective in producing biologics and small molecules with relevant target affinities. AI-enhanced simulation is then used to screen the antibody candidates and select and refine the best binders for the target. The antibody candidates will be produced at mini-scales and evaluated experimentally by Boehringer Ingelheim as part of a validation process. Subsequently, the outcomes of the lab trials will be applied to enhance the in-silico techniques through feedback loops.
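The generate, screen, validate, and refine cycle described above can be caricatured as a feedback loop. The sketch below is purely illustrative pseudocode with stub functions and random scores; none of it reflects the actual (non-public) IBM/Boehringer pipeline.

```python
import random

def generate_candidates(n):
    """Stand-in for in-silico sequence generation by a foundation model."""
    return [f"antibody-{i}" for i in range(n)]

def simulate_affinity(candidate):
    """Stand-in for AI-enhanced screening; returns a mock binding score."""
    return random.random()

def lab_validate(candidates):
    """Stand-in for mini-scale experimental validation of top candidates."""
    return {c: simulate_affinity(c) for c in candidates}

def discovery_loop(rounds=3, pool=100, keep=5):
    """Generate -> screen -> validate, feeding results back each round."""
    best = []
    for _ in range(rounds):
        candidates = generate_candidates(pool)
        # Screen in silico and keep only the strongest predicted binders.
        scored = sorted(candidates, key=simulate_affinity, reverse=True)[:keep]
        results = lab_validate(scored)  # lab feedback would refine the model
        best = sorted(results, key=results.get, reverse=True)
        pool //= 2  # later rounds focus on a narrower candidate pool
    return best

print(discovery_loop())
```

The real value of the loop lies in the feedback step: lab measurements retrain or recalibrate the generative and screening models between rounds.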

Boehringer is creating a cutting-edge digital ecosystem to facilitate the acceleration of medication discovery and development and to generate new breakthrough prospects to improve the lives of patients by working with top academic and industry partners.

Generative AI in Therapeutic Antibody Development

Additionally, IBM is using foundation models and Generative AI to speed up the discovery and development of new biologics and small molecules, and this study is the latest in that endeavor. Earlier in the year, the company’s Generative AI model accurately predicted the physico-chemical characteristics of small drug-like molecules.

Pre-trained models for drug-target interactions and protein-protein interactions are developed by the IBM Biomedical Foundation Model Technologies using a variety of heterogeneous, publicly available data sets. To give newly created proteins and small molecules the required qualities, the pre-trained models are subsequently refined using specific confidential data belonging to IBM’s partner.

Concerning Boehringer Ingelheim

Innovative treatments that change lives now and for future generations are being developed by Boehringer Ingelheim. As a top biopharmaceutical business focused on research, it adds value through innovation in areas of high unmet medical need. Having been family-owned since its founding in 1885, Boehringer Ingelheim adopts a long-term, sustainable viewpoint. The two business groups, Human Pharma and Animal Health, employ more than 53,000 people serving more than 130 markets. Go to www.boehringer-ingelheim.com to learn more.

Regarding IBM

IBM is a top global supplier of Generative AI, hybrid cloud, and consulting services. They assist clients in over 175 countries to acquire a competitive advantage in their respective industries, optimize business processes, cut expenses, and capitalize on insights from their data. IBM’s hybrid cloud platform and Red Hat OpenShift are used by over 4,000 government and business institutions in critical infrastructure domains including financial services, telecommunications, and healthcare to facilitate their digital transformations in a timely, secure, and effective manner.

IBM clients are given open and flexible alternatives by IBM’s ground-breaking advances in artificial intelligence (AI), quantum computing, industry-specific cloud solutions, and consultancy. IBM has a strong history of upholding integrity, openness, accountability, diversity, and customer service.

Read more on Govindhtech.com