Entwicklungsbuch
Fine-tuning a language model doesn’t have to be daunting. In our previous post on fine-tuning models with Docker Offload and Unsloth, we walked through how to train small, local models efficiently using Docker’s familiar workflows. This time, we’re narrowing the focus. Instead of asking a model to be good at everything, we can specialize it: teaching it a narrow but valuable skill, like consistently masking personally identifiable information (PII) in text. Thanks to techniques like LoRA (Low-Rank Adaptation), this process is not …

Just published by Docker: Read more
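The core idea behind LoRA mentioned in the teaser above can be shown in a few lines of NumPy: the pretrained weight matrix stays frozen, and only two small low-rank factors are trained. This is a minimal sketch of the technique itself, not of the Unsloth API; the layer sizes and rank are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight matrix of one linear layer (sizes are illustrative).
d_out, d_in, r = 512, 512, 8           # r is the LoRA rank, r << d_in
W = rng.standard_normal((d_out, d_in))

# LoRA freezes W and trains only the two small factors B and A.
# With the usual initialization (A random, B zero), the adapter
# update B @ A starts out as a no-op.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def forward(x):
    """Adapted layer: y = W x + B (A x); only A and B receive gradients."""
    return W @ x + B @ (A @ x)

full_params = W.size                   # 262,144 weights in the full matrix
lora_params = A.size + B.size          # 8,192 trainable adapter weights
print(f"trainable fraction: {lora_params / full_params:.4f}")
```

Training 8,192 adapter weights instead of 262,144 full weights (about 3% here, and far less for real model sizes) is what makes fine-tuning on modest local hardware practical.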

Docker Captains are leaders from the developer community who are both experts in their field and passionate about sharing their Docker knowledge with others. “From the Captain’s Chair” is a blog series where we take a closer look at one Captain to learn more about them and their experiences. Today, we are interviewing Pradumna Saraf. He is an open-source developer with a passion for DevOps. He is also a Golang developer and loves educating people through social media and blogs about various DevOps tools …

Just published by Docker: Read more

Running large language models (LLMs) on your local machine is one of the most exciting frontiers in AI development. At Docker, our goal is to make this process as simple and accessible as possible. That’s why we built Docker Model Runner, a tool to help you download and run LLMs with a single command. Until now, inferencing with Model Runner was limited to CPU, NVIDIA GPUs (via CUDA), and Apple Silicon (via Metal). Today, we’re thrilled to announce a major step forward in democratizing local …

Just published by Docker: Read more
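Beyond the single-command CLI workflow described above, a running model can also be queried programmatically, since Model Runner exposes an OpenAI-compatible API. This is a minimal sketch: the endpoint URL and the model tag are assumptions (check `docker model status` and Docker Desktop’s settings for the actual host-side endpoint in your setup).

```python
import json
import urllib.request

# Assumed host-side endpoint of Model Runner's OpenAI-compatible API;
# the port and path may differ depending on your configuration.
BASE_URL = "http://localhost:12434/engines/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat-completion payload for a local model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(model: str, prompt: str) -> str:
    """POST the payload to the local endpoint and return the reply text."""
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires a running Model Runner and a pulled model, e.g. via
# `docker model pull`; the tag below is a placeholder):
#   print(ask("ai/smollm2", "Say hello in one word."))
```

Using the standard OpenAI wire format means existing client libraries and tooling can point at the local endpoint with nothing changed but the base URL.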

This is part of the Powered by Docker series, where we feature use cases and success stories from Docker partners and practitioners. This story was contributed by Ryan Wanner. Ryan has more than fifteen years of experience as an entrepreneur and three years in the AI space developing software, and is the founder of Open Source Genius. Open Source Genius is a start-up that helps organizations navigate an AI-powered future by building practical, human-centered AI systems. In early 2025, OSG had a good problem: demand. With multiple ventures …

Just published by Docker: Read more

Developers can now discover and run IBM’s latest open-source Granite 4.0 language models from the Docker Hub model catalog, and start building in minutes with Docker Model Runner. Granite 4.0 pairs strong, enterprise-ready performance with a lightweight footprint, so you can prototype locally and scale confidently. The Granite 4.0 family is designed for speed, flexibility, and cost-effectiveness, making it easier than ever to build and deploy generative AI applications.

About Docker Hub

Docker Hub is the world’s largest registry for …

Just published by Docker: Read more

More posts ...

  1. Unlimited access to Docker Hardened Images: Because security should be affordable, always
  2. Docker at AI Engineer Paris: Build and Secure AI Agents with Docker
  3. Llama.cpp Gets an Upgrade: Resumable Model Downloads
  4. From Shell Scripts to Science Agents: How AI Agents Are Transforming Research Workflows
  5. Fine-Tuning Local Models with Docker Offload and Unsloth

Page 1 of 24

© 1999 - 2025 IT Knäpper