Archive - Historical Articles

You are viewing records from 10/22/2025 22:57:06 to 11/11/2025 15:42:30. I'll be adding support for selecting a date range in future.

The latest version of .NET is now out and available to download along with Visual Studio 2026. The update to .NET 10 adds significant performance improvements, including automatic use of AVX instructions where available and a marked improvement in process startup time. There are also some language additions that make things more succinct.

It's a little early to upgrade production systems, but I've already updated this website - I had to use a pre-release version of the Postgres EF naming conventions NuGet package, as the stable release isn't compatible with version 10 yet; it looks like they hardcoded a maximum version number into the .NET 9 build. Hopefully I'm just a day early and a compatible release will be out by the official release date.
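
For anyone doing the same, the pre-release can be pulled in from the command line - assuming the package in question is EFCore.NamingConventions, something like:-

dotnet add package EFCore.NamingConventions --prerelease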

Visual Studio 2026 itself looks really slick. It's a good incremental improvement with a lot of Copilot features, first-party support for Podman, and a few nice UI and debugging improvements - in particular, showing the contents of parameter variables and the results of if conditions at breakpoints without needing to dig into them manually.

I also see that JetBrains are hoping to have a .NET 10 compatible version of their IDE, JetBrains Rider, out tomorrow too.

Permalink 

I was fighting with unsloth / Nvidia CUDA / Python versioning today. I eventually gave up on unsloth and managed to use PyTorch directly - however, as part of the journey I found that Microsoft, Nvidia and Linux have made GPU paravirtualisation so smooth it's practically invisible.

First of all, it just works in WSL - install the Nvidia drivers (inside WSL) and you magically get full GPU support, with no up-front configuration of how much of the GPU you are taking; it's shared dynamically and very smoothly. Once the drivers are installed you can use the GPU as if it were local.
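
A quick sanity check, assuming the driver install went cleanly, is to run nvidia-smi inside the WSL distribution - it should list the host GPU and its current utilisation:-

nvidia-smi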

This is clearly intended to support container tech, which brings us on to the Nvidia Container Toolkit (the piece that provides the local libraries for the containers) and Podman GPU support. The rough sequence is:

  • Install CUDA and the Nvidia drivers on the host
  • Install CUDA and the Nvidia drivers on the WSL instance / VM, or even the Nvidia Container Toolkit.
  • Install the Nvidia Container Toolkit on the docker machine / podman machine in WSL ("podman machine ssh" to get into it) - see the sketch after this list
  • Use "--gpus all" on podman when running a container to have the GPUs passed through to the container!

Overall I was very surprised, as Nvidia historically put a lot of roadblocks in the way of paravirtualisation of their cards. It's great to see this working.

Regarding the situation with unsloth, I found it was breaking even though the libraries were installed - I think different bits of it have different dependencies. It could do with some re-engineering to guarantee its consistency; even their own containers weren't working, and I tried several tags.

Instead, I created my own micro-environment to work in, with a Dockerfile based on Nvidia's PyTorch image:-

# Nvidia's CUDA-enabled PyTorch base image (CUDA, cuDNN and torch preinstalled)
FROM nvcr.io/nvidia/pytorch:25.09-py3
# Jupyter's default port
EXPOSE 8888
WORKDIR /app
RUN pip install jupyter
# Listen on all interfaces so the notebook is reachable from the host
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--port=8888", "--no-browser", "--allow-root"]

Once this had been built with a quick:-

podman build -t aiplatform .

I could run the container using:-

podman run --gpus all -p 8888:8888 aiplatform

And then access a Jupyter notebook with a fully functional, CUDA-enabled PyTorch in it. This allowed me to get on and sort out what I wanted (I'm porting some of my old AI tech into the current LLM ecosystem).
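
To double-check that the GPU really is visible from inside the container (reusing the tag built above), a one-liner like this should print True:-

podman run --rm --gpus all aiplatform python -c "import torch; print(torch.cuda.is_available())"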

One thing I did find is that with the Docker CLI installed I wasn't able to use the --gpus parameter. If I wanted to keep using the docker command, I had to remove the Docker CLI and create a docker symlink that points at podman with a quick:-

mklink "C:\Program Files\RedHat\Podman\docker.exe" "C:\Program Files\RedHat\Podman\podman.exe"
Permalink 

I've upgraded the site to the latest .NET version. Notably, there was a vulnerability in Kestrel (the embedded webserver), so it's important to keep things updated.
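
As an aside, framework fixes like that arrive through the SDK/runtime updates themselves, but it's also worth checking NuGet references for known advisories from time to time - run from the project directory:-

dotnet list package --vulnerable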

This is also a good proof of life for anyone reading.  Hope everyone is well!

Outside of work (for my own company), I'm currently building AI-related agents and components for a highly persistent and adaptive AI - this is all the rage, so nothing special, and there are tons of examples of agentic development going on, even though I've been working on it on and off since 1998. It's good to see ideas come to fruition gradually, even if I'm not always the one to get there first - hopefully there'll be some achievements worth talking about soon.

Inside work (at my day-to-day employer), we're building machine learning for a very specific purpose and maintaining a suite of applications for internal use - there's not much to talk about.

Permalink