Somewhat off topic but still highly relevant for people who actually want to use projects like this: why oh why do so many build recipes such as Dockerfiles insist on pulling random stuff off the internet as part of the build process?
For example, the Dockerfile in this project pulls in two Git repositories and a script at build time.
Besides the obvious build failures on heavily sandboxed build servers with no internet access, this forces anyone with even a little concern for security to fully audit any build recipe before using it: merely studying and pre-fetching the dependencies listed in READMEs and build manifests like requirements.txt, package.json, etc. is no longer enough.
I find this a very worrying development, especially given the rise in critical computer infrastructure failures and supply chain attacks we've seen lately.
I really hate it when projects pull build files from the internet. Usually this happens unexpectedly. Besides the security issues that you mentioned, it also means that packaging software that depends on it becomes much more difficult and prone to unpleasant surprises, like when there is a version issue or when there is simply no internet, and of course the worst nightmare is if the dependency is not available anymore.
In this case, the individual did it for their own security research. I looked at their profile. They have a demo running a modded version of Doom on a John Deere tractor display. This person definitely takes the time to figure stuff out :D
Well, the correct path forward would be to wait for a large OSS player (Red Hat, SUSE, Canonical, ...) to make the build secure.
Typically, Fedora and openSUSE have a policy that distributed packages (which includes container images) have to build with only packages from the repository, or explicitly added binaries during the build. So once you can `dnf/zypper install` something (or pull it from the vendor's container registry), you know the artifacts are trusted.
If you need to be on the bleeding edge, you deal with random internet crap, shrug.
Of course a random OSS developer won't create offline-ready, trusted build artifacts. They don't have the infrastructure for it. And this is why companies like Red Hat or SUSE exist - a multi-billion dollar corporation is happy to pay for someone to do the plumbing and make a random artifact from the internet a trusted, reproducible, signed artifact, which tracks CVEs and updates regularly.
How is this different from JS pulling in tens of thousands of dependencies to display a web page?
In the 80s we envisioned modular, reusable software components you drop in like Lego bricks (we called it CASE then), and here we have it, success! Spoiler, it comes with tradeoffs...
Probably mostly to retain organization (via separate git repos) - in lieu of cloning stuff in the Dockerfile, you end up needing a pre-build instruction like "when you clone, use --recursive or run git submodule init to get the other repos into your CWD".
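Something like this, instead of RUN git clone inside the Dockerfile (repo URLs below are just placeholders):

    # vendor the dependency once and commit the submodule pointer
    git submodule add https://example.com/some-dependency.git third_party/some-dependency
    # consumers then either clone with submodules in one go...
    git clone --recursive https://example.com/this-project.git
    # ...or fetch them after a plain clone
    git submodule update --init --recursive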
The only chance at GPU acceleration is passing through a supported dGPU (>= AMD RX 6xxx on macOS 14.x; no chance with modern NVIDIA) with PCI passthrough. Intel iGPUs work up to Comet Lake, and some Ice Lake, but anything newer will not work.
The Apple Silicon build of macOS probably isn't going to be emulatable any time soon, though there is some early work on booting ARM Darwin.
Also, Intel VT-x is missing on AMD, so virtualization is busted on AMD hosts, although some crazy hacks with old versions of VirtualBox can make Docker kind of work through emulation.
In theory someone could write a display driver for libvirt/kvm/qemu 3D acceleration, like the ones that exist for Windows and Linux. With those, (suboptimal) GPU performance would become available for just about any GPU.
AMD has its own VT-x alternative (AMD-V) that should work just fine. There are other challenges to getting macOS to boot on AMD CPUs, though, usually fixed by loading kexts and other trickery.
I don't really see the point of using Docker for running a full OS. Just distribute an OVA or whatever virtualisation format you prefer. Even a qcow2 with a bash script to start the VM would probably work.
Nope. There have only ever been Intel x86 Apple computers, so x86 Mac software is Intel-specific. Most things work fine on AMD, but some things don't work without hacks, such as digital audio workstations, some Adobe applications, etc. And you can't run hypervisors on an AMD Hackintosh; the workaround for Docker is to install an old version of VirtualBox and make it emulate instead.
I encourage you to check out the OSX-PROXMOX project, which fully supports AMD and is designed to simplify these inside-a-VM setups (though not as much as a Docker setup). https://github.com/luchina-gabriel/OSX-PROXMOX
Also, there are a couple of kext projects that allow you to use AMD graphics, even iGPUs, on Hackintoshes. I have not tested this myself, but there are rumblings you may even be able to get this to work with a Steam Deck.
Proxmox uses QEMU and boots OpenCore, so it's the same set of problems. It's great to see NootedRed progress, but it's currently limited to RDNA2 AFAIK, and there are lots of weird graphical issues in some configurations. Intel is unquestionably a lot simpler.
I think for the most part the CPUs should be OK. They do have different feature sets, but the ISA is the same. The platform chipset is a different topic, I guess. They don't need to share any logic or semantics between AMD/Intel, as those are handled by drivers rather than having to execute the programmer's machine code directly.
Not 100% on this, but x86_64 between AMD and Intel does share a lot of overlap, right? If you don't go too far into extensions, perhaps.
I'd guess vmxon/vmxoff and the VMCS structures etc. will still be the same on both? A lot of security stuff etc. is totally different (AMD PSP vs Intel ME etc.)
(Still agree ofc, but just thinking about where these differences are located, as the CPUs can run very similar or the same code.)
It mostly works, but virtualisation (even on bare metal) isn't possible at the moment, some applications need special patches, and there are weird issues here and there in some situations. AMD hacks are a hobby for a lot of people.
I set this up a few months ago as an experiment. Worked pretty well until I discovered that for iMessage to work, the application phones home to Apple using your hardware IDs, and this project uses fake values. At that point I started spiraling down the Great Waterslide of Nope, slowly discovering that the fake values are flagged by Apple and they will, as a consequence, flag your iCloud ID as a potential spammer, limiting your access from other devices. Your only option is to use a hardware ID generator script they vaguely link out to, and you can just keep trying values until you find one that "works", but there's not actually a good signal that you found one that works and isn't harming your iCloud reputation.
Worked really great otherwise, though. Very useful in a pinch.
The "keep cycling HWIDs until one works" thing was also common to get Hackintosh iMessage to work, you'd be able to check if it works by going to checkcoverage.apple.com. I quickly realized it's easier to copy the Serial from a old but real Mac.
But I think this tool is more useful for things like build scripts (that rely on proprietary macOS frameworks) than for actually using it like a personal computer.
While at RStudio (now called Posit), I worked on cross-compiling C/C++/Fortran/Rust on a Linux host targeting x86_64/aarch64 macOS. If you download an R package with native code from Posit Package Manager (https://p3m.dev/client/), it was cross-compiled using this approach :)
I did this. I had to share my USB port over Docker somehow (black magic I guess, instructions in the repo) and I was able to build iOS apps and run them on an iPhone.
This would be awesome for running iCloud sync on my homeserver. Currently, there is no good way to physically back up iCloud on a homeserver/NAS, because it only runs on Windows/Apple.
I've been working on a solution here that uses OSX-Docker & OSXPhotos. It's getting there. I wanted a way to back up all the info in iCloud, but also include the metadata changes. Turns out that iCloud doesn't update the raw photos. Makes sense, but not helpful for those who do backups and expected those changes to be there.
The problem is that I would have to purchase a dedicated desktop with enough storage to hold all my iCloud files and iCloud no longer syncs to external drives, so it’s cost prohibitive to purchase a desktop Mac expressly for that purpose.
Corellium was a big commercial player and made headlines. Anyone using this privately (and especially non-commercially) probably isn't at risk of action from Apple, although I wouldn't be surprised if Apple eventually tries to go after publicly hosted images.
"Illegal" might be a bit strong, "Against the EULA" a bit more realistic, which may or may not be illegal, depending on the context and involved country.
Hosting copyrighted media without a distribution license is usually illegal. Very few countries allow you to just distribute proprietary disk images like this.
You can extract the images yourself from official install media (for instance, the installers you can create from within macOS) and use it for whatever personal project you want; you'd be breaking the EULA, but that doesn't mean much. You're not allowed to throw your copy on the internet, though.
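For instance, on a real Mac you can build the official install media yourself with Apple's own tooling, roughly like this (the installer name/version here is just an example):

    # download a full installer straight from Apple
    softwareupdate --fetch-full-installer --full-installer-version 13.6
    # write it to a USB stick; the app name matches whatever version you downloaded
    sudo "/Applications/Install macOS Ventura.app/Contents/Resources/createinstallmedia" \
        --volume /Volumes/MyUSB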
Other projects I've seen download the installer images directly from Apple, something they could probably detect and block if they wanted to. That would probably be completely legal, as nobody is unlawfully distributing the files. This is different; the Docker images contain a copy of macOS.
Apple could probably take this project down any time they want, but if they cared, they probably would've done it already.
I'm not a lawyer, but pretty sure unauthorized redistribution of copyrighted material is a crime (in the US). This Docker image contains Apple-copyrighted files, probably, but anyone feel free to explain if I'm wrong.
I think you're right about the definition of criminal infringement. I still think this image is civilly liable for infringing Apple's copyright (not a crime, as I originally said).
> The EULA might prohibit redistribution
I don’t think it matters. Copyright law automatically forbids copying. Well, assuming Apple complied with any requirements to have a valid copyright, which seems a safe bet.
My understanding is that commercialization certainly weakens a fair use argument, but that its absence does not automatically make a reproduction and/or distribution fair use.
I suspect that it probably doesn't matter; Apple has generally not cared about Hackintoshes as long as you aren't selling pre-made ones. Apple probably doesn't really mind stuff like this, since it isn't realistically eating much into Apple's market.
I really hate when "USB Passthrough" is used in situations when, at best, a "USB over ethernet proxy" is what is happening. That's not passthrough... It introduces a whole range of disadvantages that regular passthrough does not (and advanced passthrough might not) have.
Eh? QEMU USB passthrough is true USB passthrough. The problems with USB passthrough stem from issues related to USB controllers themselves and how device enumeration works, with the only better solution being PCIe passthrough of entire USB controllers... Which then present a different set of problems. Speaking from experience in large VM test farms with a significant amount of forwarded hardware.
(However, "USB over ethernet proxy" is also a true passthrough, just one with higher latency than VirtIO.)
I skimmed the README only and just saw the big section on USB over Ethernet with the video image and everything, not the tiny mention of VFIO above it. Lol.
But tell me please, which problems do you have with PCIe passthrough?
Also speaking from experience in large VM test farms with a significant amount of forwarded hardware. I've never experienced problems with hundreds of machines doing exactly this, for years.
1. VMs operate on a copy of certain PCIe descriptors obtained during enumeration/when forwarding was set up, meaning that some firmware updates that depend on these changing cannot work correctly. The exact details have left my memory.
2. Foo states that only happen when forwarding. Hardware that seems so stable when used directly that bugs would seem inconceivable enters broken states when forwarded and fails to initialize within the VM.
Hardware and drivers are both full of bugs, and things become "fun" when either gets surprised. You can deal with it when you're forwarding your own hardware and using your own drivers, so discovered issues can be debugged and sorted out, but it's much less fun when you're forwarding stuff from other vendors out of necessity.
Dealt with this one just this morning.
3. Reset bugs. Hardware reset and sequencing is a tricky area (speaking from old FPGA experience), and some devices cannot recover without a full power cycle.
In some cases, I can recover the device by stopping the forward, removing the device (echo 1 > /sys/bus/pci/devices/.../remove), rescanning, letting the host kernel temporarily load drivers and initialize the device, and then forwarding it again - rough sequence sketched below. Did that today.
4. Host crashes. Yay.
Forwarding a single device on a user machine that still gets regular reboots tends to work fine, but things get hairy when you scale this up. I've had to do a lot of automation of things like handing devices back to the hypervisor for recovery and firmware management.
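For the curious, the recovery dance from point 3 looks roughly like this (the PCI address is a placeholder, and the exact detach step depends on your stack):

    # 1. detach the device from the VM first (virsh detach-device, or stop the guest)
    # 2. drop the wedged device from the host's view
    echo 1 > /sys/bus/pci/devices/0000:03:00.0/remove
    # 3. re-enumerate; the host driver binds and re-initializes the device
    echo 1 > /sys/bus/pci/rescan
    # 4. rebind to vfio-pci and hand it back to the VM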
Strange...
Sounds like you may be doing too many things manually or that what you're testing is the device that is connected directly to USB?
In my case I need 3rd party USB devices (that always just work(™)) to communicate and interact with hardware. Been automating/running literally hundreds of these configurations without a single issue related to USB or PCI passthrough.
Even got switchable HUBs for USB in the mix sometimes, too (for power cycling specific USB devices). Works fine as well.
"Manually"? There is only QEMU/KVM, how many layers you put in between does not matter. Proxmox is just a pile of perl scripts doing the same.
My experience is in testing both USB downstream devices and PCIe devices developed in-house. Some of the forwarded devices might be 3rd-party devices like hubs, relays for power cycling and USB isolators to simulate hot-plug, but the DUTs are stuff we manufacture.
In the USB test scenarios (we have roughly 100 such machines, on average connected to a dozen DUTs, some more), the symptom of failure is generally that the entire controller can discover downstream devices but permanently fails to communicate with any of them, or that the controller itself fails to initialize entirely.
The PCIe test scenarios are not something I actively work with anymore, but they involve a server room full of machines with 4-7 DUTs each and much more custom handling - such as hot-unplugging the device from the VM, resetting and firmware-updating the device, and hot-plugging it back as part of the test running in that VM - as testing PCIe devices themselves exercises many more issues that you don't see with standardized hardware.
I have done this for about a decade, so I've been through a few iterations and tech stacks. One can find things that work, but it's not in any way or form guaranteed to work.
Yeah, isochronous mode is unfortunately not supported for USB passthrough on Proxmox. There were experimental implementations in oVirt back in the day (that is: experimental implementations in a non-prod, only-for-evaluation solution...).
So, to clarify things: it's QEMU running in a container, and macOS running under QEMU inside it.
This is really nice WRT the ease of installation: no manual setup steps and all.
This likely expressly violates the [macOS EULA], which says: «you are granted a limited, non-exclusive license to install, use and run one (1) copy of the Apple Software on a single Apple-branded computer at any one time» — because the point is to run it not on a Mac. So, pull it and keep it around; expect a C&D letter to arrive any moment.
> (iii) to install, use and run up to two (2) additional copies or instances of the Apple Software, or any prior macOS or OS X operating system software or subsequent release of the Apple Software, within virtual operating system environments on each Apple-branded computer you own or control that is already running the Apple Software, for purposes of: (a) software development; (b) testing during software development; (c) using macOS Server; or (d) personal, non-commercial use
So basically you can run macOS however you want as long as you're already running macOS on Apple hardware.
The question I've always had is: how enforceable is that, really? Obviously the whole point of Apple making macOS freely available is to run it on Apple hardware. They don't give it out for free to run on other hardware, but can they really do anything about that other than requiring you to enter a serial number to download an image? If they really cared, they would just do something like hashing the serial number and current date and time against a secret key (maybe inside a read-only portion of the TPM), and only Apple would be able to verify that the hardware is legit. You would need to somehow expose the TPM to the hypervisor to be able to generate hashes for macOS to verify its license. Clearly this is not a huge problem for Apple, because they would already be doing this if it were an issue.
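Just to illustrate the scheme being described (a toy sketch of this hypothetical check, not anything Apple actually does; every name here is made up):

    # derive a proof-of-genuine-hardware token from the serial number and the current time
    SERIAL="C02XXXXXXXXX"            # hypothetical hardware serial
    TS="$(date -u +%Y%m%d%H%M)"
    # the secret key would live in read-only hardware (TPM/secure element), not in a shell variable
    TOKEN="$(printf '%s%s' "$SERIAL" "$TS" | openssl dgst -sha256 -hmac "$SECRET_KEY" -r | cut -d' ' -f1)"
    # only Apple, holding the same key server-side, could verify that TOKEN matches legit hardware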
It’s sort of enforceable - Apple’s own virtualisation framework that lots of VM providers use (on Apple Silicon) actually enforces a hard cap of two guests, and won’t allow you to spawn more.
With other hosts, it’s kind of an Adobe approach - you either weren’t gonna buy a Mac anyways, or you might be tempted to buy a Mac after using macOS in a VM. Realistically, it’s not worth Apple coming after you unless you’re an enterprise making your money by breaking the EULA.
Apple-branded machine. I got some of those nifty Apple stickers to brand machines. MacBooks and Mac Pros as well as iMacs have serial numbers. They already have their whole arm ... I will just remind them again. Do not anger Linux wizards. They put Linux on an iPod.
Probably as enforceable as any other EULA. Windows surely has similar language. I'd guess that somewhere buried deep in the agreements, or somewhere, it says they can audit your usage somehow. Does it ever happen? I'd be curious to know.
Windows doesn’t have similar language. Not directly, anyway. Depending on the edition of Windows you purchase and how your overall license agreement works, you get anywhere from zero to ten VM licenses per paid Windows license.
I’m omitting a few details for brevity (MS licensing is nuts when you get into the weeds).
Oh thank God! Now I have a use for my Mac Plus badge. Just cut and glue, and voilà, Apple-branded. It was running System 6, so Snow Leopard. I did install QuickTime and it basically destroyed my Windows, but it's a port of the MacOS. This EULA has more holes in it than a Windows 95 login screen. I am in control.
Indeed. That would cover a conventionally installed VM, like VirtualBox.
But this is packaged as a Docker image, and Docker is Linux-specific. Linux is not officially supported by Apple on their hardware, and is certainly not prevalent on it. I doubt that the intended target audience of this project is limited to Asahi Linux.
Docker actually ships their easy-to-use and commercially supported Docker Desktop product for macOS, which uses Apple's standard virtualization framework under the hood. I think it then runs the Docker containers within a Linux VM that it manages.
For people who want an open-source CLI solution rather than a commercial product which for larger businesses requires payment, there's also colima which does roughly the same thing.
So, lots of people very successfully use Docker on macOS, including on Apple hardware.
This particular software would need nested virtualization to be highly performant, but at least on M3 or newer Macs running macOS 15 or newer, this is now supported by Apple's virtualization framework:
So, if that's not easy to do in a useful and performant way now, it will absolutely be possible in the foreseeable future. I'm sure that the longtime macOS virtualization product Parallels Desktop will add support for nested virtualization quite soon if they haven't already, in addition to whatever Docker Desktop and colima do.
(Tangent: Asahi Linux apparently supports nested virtualization on M2 chips even though macOS doesn't.)
Running Linux in a VM (for Docker) to run an emulator (QEMU) in it to run macOS in that looks to me like a senseless waste of resources. Linux and Docker add no value into the mix here.
The same result can be achieved by running macOS right in the VM. This can be extra efficient since both the host OS and the guest OS are macOS, and the VM could use this fact.
It may make sense to run macOS in an emulator like QEMU under macOS, if the host version is ARM and the guest version is x64 (or vice versa). But I don't see where Linux and Docker would be useful in this case.
I agree the particular combo I was discussing is likely not very useful compared to just virtualizing macOS directly, except in niche cases.
One such case, however, is when the user is already managing Linux Docker containers for other parts of their development or testing workflow and wants to manage macOS containers with the same tooling. That's legitimate enough, especially when it ends up using nested virtualization of the same architecture rather than true emulation, which keeps the performance penalty modest.
Docker can run on macOS (albeit in a VM), but it's still running on a Mac "that is already running the Apple Software". So it's a perfectly valid option for Mac owners, even if it's a VM + container + VM deep.
"""
"Corporate Headquarters has commanded," continued the magician, "that everyone use this workstation as a platform for new programs. Do you agree to this?"
"Certainly," replied the master, "I will have it transported to the data center immediately!" And the magician returned to his tower, well pleased.
Several days later, a novice wandered into the office of the master programmer and said, "I cannot find the listing for my new program. Do you know where it might be?"
"Yes," replied the master, "the listings are stacked on the platform in the data center."
"""
Back in ye olden days, people used to print out programs... nay, they even used to _hand-write!_ programs before they began typing, because keyboard time was valuable (never mind compilation/computation/debugging time).
Serial ports were slow and grep wasn't really a thing, so having a printout (or "listing") of your program was a more efficient way (or the only way!) to debug your program after the fact. https://www.youtube.com/watch?v=tJGrie7k97c
Back in the 90's, I had some programming classes in high school where there were 30 chairs but 15 computers (around the edge)... bring your own 360kb floppy disk! So you had a real incentive, and a strict teacher who insisted that you write out your program ahead of time, show it to her for a first pass/feedback, and _then_ you'd get to go type it on the computer and see if it worked. Submissions were via printouts (of the program, aka "listing", along with the output) which she then took home and graded.
Stick tongue firmly in cheek, empty your cup, and enjoy the ride!
Edit: ...and the relationship to the cantankerous original comment that "couldn't figure why they'd want to run OSX?": this is the zen-koan sarcastic response of "use it as a platform for development" (i.e. stack your papers on top of it).
> I doubt that the intended target audience of this project is limited to Asahi Linux.
I guess that part of the license is meant to automatically disqualify an Apple-branded computer running a Linux distro as the host OS from running macOS in a VM:
"on each Apple-branded computer you own or control that is already running the Apple Software"
Some smart-ass might argue that "already running the Apple Software" doesn't mean at the exact same time but more like "I am still running it sometimes as dual boot", but I am not sure this would pass the court test.
And since I believe Docker on macOS runs in a Linux VM, this would be running QEMU on top of a Linux VM on top of macOS.
I can't see any legit use of this. Anyone who needs automated and disposable environments for CI/CD would simply use UTM on Mac minis:
https://docs.getutm.app/scripting/scripting/
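If I remember right, UTM also ships a utmctl command-line tool alongside that AppleScript interface, which is what makes the disposable-VM workflow scriptable (subcommand names from memory, so double-check the docs):

    utmctl list                        # enumerate configured VMs
    utmctl start "macos-ci-runner"     # boot a VM by name (hypothetical VM name)
    utmctl stop "macos-ci-runner"      # shut it down again when the job finishes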
> you are granted a limited, non-exclusive license to install, use and run one (1) copy of the Apple Software on a single Apple-branded computer at any one time»
In that case... If I run Asahi Linux on my Apple Silicon MacBook Pro as the main operating system and then run macOS in a container, I should be fine.
See the rest of the license: the host must be "already running the macOS operating system", which I understand as the host OS, not as merely still capable of running it because the SSD hasn't been wiped of a macOS install.
To be fair, I'm not sure any Linux veterans are using Ubuntu. It's a popular OS, but it's not a good OS. (Think terrible pop music that teenagers will still listen to.)
Even Debian has lost its favorability by having sooo much legacy bloat, bugs, and outdated kernels that won't run NVIDIA GPUs (2023) or other recent peripherals.
I'd be much more curious how Fedora or openSUSE hold up.
"Sorry a decade of use is not enough to be considered an expert. Also your experience is useless because it's on a distro I don't like."
This is just pointless gatekeeping doubled down on at this point. People can be experts and use Kubuntu. People can be veterans and use Ubuntu. People can be absolute beginners and use Arch or OpenSUSE or literally any other distro. Use of distro is in no way shape or form indicative of experience other than that some are easier to get started with for absolute beginners than others. But that doesn't make them any less good.
It's a personal choice with each options having its own pros and cons. Not some indicator of experience or knowledge.
What is it that makes one a "Linux expert"? Knowing bash/awk well? Embracing the pain that some other distros are? Using Vim? If it's any of those then I'm definitely no expert, as I primarily use Python whenever bash starts to get even a bit complex, selected Kubuntu because I didn't have to deal with a bunch of source issues I had with Ubuntu (due to licensing; also avoided Arch as I heard it's a nightmare, but occasionally work on a CentOS box as part of my job), and do almost everything re text in Emacs.
I’ve been continually using Linux for various purposes since the late 90s, and recently wrote a non-trivial kernel module for an embedded device. So I’m veteran-ish.
I tried Ubuntu on my MBP because I thought its popularity would mean the best chance of things working out of the box. I’m long past having time to spend on getting basic things working.
Can I get a whole bunch of Apple stickers and brand the heck out of an old Dell r630 Server and run this on it? Or how about a cattle brand with an Apple logo?
Hackintosh has been around for almost 2 decades and AFAIK Apple hasn't threatened legal action on anyone except those trying to profit monetarily from it (the only one that comes to mind being Psystar).
Apple now even publicly distributes macOS from its site with no authentication required, something that certainly wasn't true in the early days of Hackintosh.
Given that some Hackintoshers may be doing it for the purposes of "security research" (bug bounty chasing), which indirectly benefits Apple, I don't think they will change the unsaid stance anytime soon.
On the other hand, its attempts at destroying right-to-repair and third-party OEM parts show what it actually worries about.
> So, to clarify things: it's QEMU running in a container, and macOS running under QEMU inside it.
A bit tangential, but is this more performant/"better" than running macOS on, say, Hyper-V? I understand my Zen 4 laptop won't allow GPU acceleration anyway; I'm only looking to run a few apps (and maybe Safari) on it.
Did it get taken down again? The takedown I remember was a few years ago, and GitHub announced some policy changes to make it harder for that to happen when they very loudly reinstated it:
Fair. But the docker image provider would be in violation, never having received a license to redistribute macOS images. Without these, the seamless usability aspect is gone, though the repo remains pretty useful because it automates all other steps.
It's trivial to build these containers by grabbing the install images from Apple directly. Beyond that this is all covered in the documentation.
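For anyone wondering what that looks like in practice: the upstream OSX-KVM tooling this image builds on has a fetch script for exactly this (script name from memory, so check that repo's README for current usage):

    git clone --depth 1 https://github.com/kholia/OSX-KVM.git
    cd OSX-KVM
    python3 fetch-macOS-v2.py    # pulls BaseSystem.dmg straight from Apple's update servers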
I guess I'm curious why you're so focused on this violating anything? Apple clearly doesn't care, as folks like myself have used it for years. Apple's target market is hardware buyers, not people who do things like this. If this actually impacted sales, sure - but Apple doesn't sell OSX anymore.
As an aside the sickcodes work is great for people wanting to leverage Apple's "Find My" network with non-Apple devices by leveraging OpenHaystack [0].
I assume the EULA is mainly intended to prevent companies from running Hackintosh at massive scale rather than being aimed at individuals -- although building your business/infrastructure on Hackintosh is a very questionable business and technical decision by itself.
Yes, but the host he's using is Apple Silicon; I think what he's talking about is QEMU using Apple's Hypervisor framework, which is what VMware, Parallels, etc. also use nowadays. Booting an Apple Silicon version of macOS on non-Apple hardware probably isn't going to be possible for a while, as it would require emulation.
In 2021 BlackBerry, surprisingly, wrote this article about emulating the XNU kernel and getting it running on non-Apple hardware, but it's just a terminal:
Someone would have to write something that can emulate/abstract the Apple iGPU to get anywhere near a usable GUI - I'm no expert, but I don't think this is going to happen anytime soon, so when Intel releases of macOS stop happening, Apple hardware might be the only way to virtualize macOS for a while.
> Someone would have to write something that can emulate/abstract the Apple iGPU to get anywhere near a usable GUI
I'm not familiar with what Apple's GPU architecture on its ARM SoCs looks like, but wouldn't a framebuffer be sufficient? Or does ARM macOS have absolutely no software rendering fallback and relies on the GPU to handle all of it?
I know that regular amd64 macOS runs fine without GPU acceleration in a VM (like what is shown here), and arm64 Windows likewise with an emulated EFI framebuffer in QEMU on an amd64 host (it's bloody slow, being 100% emulated, but it works well enough to play around with.)
All the sibling comments appear to have missed your requirement of running Big Sur (macOS 11) --- everything based on Virtualization Framework is "paravirtualisation" and requires cooperation between the host and guest.
You could run the amd64 version of macOS 11 in QEMU on the M1, but that's ARM-to-x86 emulation, which will be slow, and I suppose isn't what you're looking for.
It doesn't matter how sluggish this is - all I'm looking for is to start Big Sur, open the App Store and install one app (Final Cut Pro). In other words, looking for a way to download the older version of FCP.
It uses Apple's Virtualization framework and works well, besides issues with virtiofs.
But those can be worked around with virtual block devices aka images.
> Apple’s current implementation of lightweight virtualisation still has no support for Apple ID, iCloud, or any service dependent on them, including Handoff and AirDrop. Perhaps the most severe limitation resulting from this is that you can’t run the great majority of App Store apps, although Apple’s free apps including Pages, Numbers and Keynote can still be copied over from the host and run in a guest macOS.
Let’s say I wanted to run a headless Logic Pro for programmatic music production. Would I use this? Or should I containerize the application itself? It’s okay if I have to run it on Apple hardware.
It looks like the "vnc-version" Dockerfiles will set up an Xvnc server and direct QEMU's output to it, and you can connect to that using a VNC client. The standard version sets up X11 forwarding over SSH and/or you can pass the host's X11 socket and corresponding DISPLAY variable directly to the container.
QEMU also has its own built-in remote access capabilities (SPICE and VNC-based) but the former needs guest support.
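For reference, the standard (X11-forwarded) variant is started with something along these lines per the project README (image tag and DISPLAY value depend on your setup):

    docker run -it \
        --device /dev/kvm \
        -p 50922:10022 \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        -e "DISPLAY=${DISPLAY:-:0.0}" \
        sickcodes/docker-osx:latest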
New to containers. How easy would it be to run only the OSX Reminders and Calendar apps, or as stripped-down a system as possible to get these running without the overhead of the full OS? The webapp versions of these are crippled compared to the OSX/iOS apps.
I have an M1 MacBook Air and the 2022 iPhone SE; so far the performance of both is pretty good!
However, the prices are definitely outside my regular budget (needed it for an iOS app project cause of walled garden ecosystem) and I only got the 8 GB MacBook which in hindsight very much feels like a mistake, even with the exorbitant pricing for the 16 GB model.
For the price of the 8 GB model I could have gotten a nice laptop with 32 GB of RAM built in. That said, I don’t hate the OS, it’s all quite pleasant and performs well.
I keep it open 24/7. Where are those forums? Have you ever seen how a forum is organized? Do you think all text-based chat windows are forums?
Hint: Reddit is sort of a collection of forums. Discord, WhatsApp group chats, Slack and other similar things are not; they're just discardable text chat.
Tell me you don't know what a forum is without telling me you don't know what a forum is :)