• 0 Posts
  • 7 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • I’m glad to hear yours have been holding up! Maybe my friends and I were just particularly unlucky.

    The service manuals are available directly from Dell. For all the laptop’s faults in my experience, I do appreciate that the SSDs are socketed, as are the RAM sticks on the 15. I also appreciate that Dell sells replacement batteries (and they aren’t glued in either!), since that’s usually the first part to need a swap.


  • I haven’t used the XPS 13 personally, but my experience, and all my friends’ experience, with the XPS lineup is that despite their build quality, they’re quite prone to failure. On my 15, the keyboard failed multiple times, as did one of the fans and eventually one Thunderbolt port, all within a span of four years.

    They’re beautiful machines that really should be high quality, but in practice, for some reason, they haven’t lasted for me. On the plus side, Dell does at least offer service manuals, and lots of parts can be replaced by the user (on the 15 you can easily replace the fans, RAM, and SSDs, and with some work you can replace the top deck, display, and SD reader).


  • The main benefit, I think, is massive scalability. For instance, DOE scientists at Argonne National Laboratory are working on training a language model for scientific uses. That isn’t something you can do on even tens of GPUs for a few hours, as is common for jobs run on university clusters and the like. They’re doing it by scaling up to use a large portion of ALCF Aurora, which is an exascale supercomputer.

    Basically, for certain problems you need both the ability to run jobs on lots of hardware and the ability to run them for long periods of time (though not so long that they crowd out other labs’ work). Big clusters like Aurora are helpful for that.
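
    To make “scaling up” concrete, here’s a minimal PyTorch-style sketch of how a multi-node training job typically bootstraps itself. This is purely illustrative (not Argonne’s actual code), and the environment variables are the ones common launchers set:

    ```python
    # Illustrative multi-node training bootstrap (not Argonne's actual code).
    # A launcher such as torchrun or an mpiexec/Slurm wrapper starts one copy
    # of this script per GPU and sets the rank/size environment variables.
    import os

    import torch
    import torch.distributed as dist

    rank = int(os.environ["RANK"])
    world_size = int(os.environ["WORLD_SIZE"])
    local_rank = int(os.environ["LOCAL_RANK"])

    # "nccl" is the usual backend on NVIDIA clusters; Aurora's Intel GPUs
    # would use Intel's oneCCL backend instead.
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda()
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])

    # ... training loop goes here. The point: the same script runs whether
    # world_size is 8 (one node) or tens of thousands (a big Aurora job).

    dist.destroy_process_group()
    ```

    Nothing in the script changes as the job grows; the launcher and scheduler decide how many copies run, which is why this kind of work needs a machine like Aurora rather than a handful of GPUs.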




  • Fair enough! I think it’s more common for games to do that, but I sometimes had trouble with Windows software that used virtualization features itself. I probably just didn’t configure the Hyper-V settings properly, but I know nested virtualization can be tricky.

    For me it’s also because I’m on a laptop: my Windows VM relies on me passing through an external GPU over TB3, and my laptop’s dedicated GPU has no connection to a display, so GPU passthrough for the VM would be tricky on the go. I like being able to boot Windows on the go to edit photos in Lightroom, for example, but otherwise I’d prefer to run the Linux host and use the Windows VM only as needed.


  • I’m a fan of dual booting AND using a passthrough VM. It’s easiest to set up if your machine has two NVMe slots and you put each OS on its own drive. This way you can pass the Windows NVMe through to the VM directly.

    The advantage of this configuration is that you get the convenience of not needing to reboot to run some Windows-specific software, but if something doesn’t play nice with virtualization (maybe a program takes too large a performance hit in a VM, or it refuses to run on virtualized systems, like some anticheat-enabled games), you can always reboot straight into that same Windows installation.
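
    If you go this route, the passthrough side can be scripted too. Here’s a rough sketch using the libvirt Python bindings; the domain name (“win11”) and the PCI address are placeholders I made up for illustration, so substitute your own:

    ```python
    # Rough sketch: attach a whole NVMe controller to a Windows VM via VFIO
    # using the libvirt Python bindings (pip install libvirt-python).
    # The domain name "win11" and PCI address 0000:04:00.0 are placeholders;
    # find your controller's address with `lspci -nn | grep -i nvme`.
    import libvirt

    HOSTDEV_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("win11")

    # Persist the device in the VM's config so the VM always boots from the
    # same physical drive the bare-metal Windows install lives on.
    dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()
    ```

    The same drive then works both ways: boot it bare metal when something refuses to run in the VM, and boot it as a VM the rest of the time.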