
Proxmox VE OSX Guide discussion


fabiosun

Recommended Posts

Here are my VM settings for Big Sur; I only use BS now.

args: -device isa-applesmc,osk="ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc" -cpu host,vendor=GenuineIntel,+invtsc
balloon: 0
bios: ovmf
bootdisk: ide0
cores: 64
cpu: host
efidisk0: local-lvm:vm-100-disk-0,size=4M
hostpci0: 23:00,pcie=1,x-vga=1
hostpci1: 01:00.0,pcie=1
hostpci10: 25:00.3,pcie=1
hostpci11: 46:00.0,pcie=1
hostpci2: 02:00.0,pcie=1
hostpci3: 43:00.0,pcie=1
hostpci4: 44:00.0,pcie=1
hostpci5: 47:00.0,pcie=1
hostpci6: 48:00.1,pcie=1
hostpci7: 4b:00.0,pcie=1
hostpci8: 04:00.3,pcie=1
hostpci9: 48:00.3,pcie=1
localtime: 1
machine: q35
memory: 122880
name: BigSur
numa: 1
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=6623598a-7a98-4b99-8229-e44ed0d3568c
sockets: 1
tablet: 0
vga: none
vmgenid: d3eb685a-544e-4936-b26a-89f3e0fec696
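
If anyone wants to reuse this, here is a rough sketch of applying it from the Proxmox shell, assuming VM ID 100 (adjust the ID, memory and hostpci addresses to your own hardware). The config itself lives in /etc/pve/qemu-server/100.conf, or you can set the args line with qm:

# set the SMC/CPU args on VM 100 (same args string as in the config above)
qm set 100 --args '-device isa-applesmc,osk="ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc" -cpu host,vendor=GenuineIntel,+invtsc'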

Here is my EFI. I am booting with host in the VM config and the kernel collections method, like vanilla.

 

EFI.zip


1 minute ago, fabiosun said:

(screenshot: the two config.plist entries in question)

Maybe you can also boot without these two lines?

In my case they are not needed with the latest OC master release.

 

That is still there because I am still using the same config.plist I was using when OpenCore had to use the prelinkedkernel booting method. It's not needed anymore; I just haven't removed it from the config.plist. It has no effect since nothing adds the `booter-fileset-kernel` variable to NVRAM anymore.
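
If you want to double-check from inside the guest, a quick way (assuming the stock macOS nvram tool) is to dump NVRAM and look for the variable:

# prints all NVRAM variables; no match means nothing is setting it anymore
nvram -p | grep booter-fileset-kernel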


2 hours ago, fabiosun said:

@iGPU also with the latest kexts (Lilu) and drivers?
If so, that is weird.

Yesterday I reinstalled the latest b2 from High Sierra and had success with my old 0.6.0 EFI with host in the VM.

You can also try:

AvoidRuntimeDefrag = NO, if you previously had it set to YES.

Has Gigabyte updated the Designare BIOS to AGESA 1.0.0.4?

 

I'm on the latest BIOS for the MSI TRX40 mobo; it has AGESA 1.0.0.4. This BIOS was released in May 2020. And yes, I'm quite obsessive about keeping kext files up to date, so they are the latest.

 

In fact, I just updated OC to the v0.6.0 17 July release this morning (using Pavo's OpenCore Builder!).

 

(And thank you, Pavo, for clearing up my confusion with OCBuilder and kext files: I had not re-activated the Command Line Tools pop-up in the Preferences/Locations pane inside Xcode. It got turned off after the BS β2 update.)

Edited by iGPU
Thanks to Pavo.

14 minutes ago, fabiosun said:

So maybe this too: AvoidRuntimeDefrag should be set to NO now. In my case it is NO.

 

Actually, I can boot now with all of the Booter > Quirks disabled.

(screenshot: Booter > Quirks all disabled)
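
For anyone following along, a quick way to flip these quirks without a plist editor is PlistBuddy; just a sketch, and it assumes your ESP is mounted at /Volumes/EFI with the stock OC folder layout:

# mount the ESP first (e.g. sudo diskutil mount disk0s1), then:
sudo /usr/libexec/PlistBuddy -c "Set :Booter:Quirks:AvoidRuntimeDefrag false" /Volumes/EFI/EFI/OC/config.plist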



Pavo,

 

I still cannot boot into BS Recovery. I did set UEFI/APFS/JumpstartHotPlug to enable, but it loops out. (I'm at work and I now forget the exact one-line error I see.) Any suggestions?

 

And one more question. There is talk on some forums about deleting APFS snapshot disks. Do you think this is useful or necessary?

 

Thanks for your input.

Edited by iGPU

12 minutes ago, iGPU said:

I still cannot boot into BS Recovery. I did set UEFI/APFS/JumpstartHotPlug to enable, but it loops out. [...] Do you think deleting APFS snapshot disks is useful or necessary?

I am not sure why you have the issue with booting into Recovery; I have UEFI/APFS/JumpstartHotPlug enabled and that was all I needed to boot into Recovery. I suggest not removing anything that is vanilla-built by the installation process. These extra changes made by the installation process are there for a reason; we might not understand what that reason is right now, but they are there for a reason.
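
If you want to confirm what your APFS section actually contains, a quick check (again assuming the ESP is mounted at /Volumes/EFI and the standard OC layout) is:

# print the whole UEFI > APFS dictionary
/usr/libexec/PlistBuddy -c "Print :UEFI:APFS" /Volumes/EFI/EFI/OC/config.plist
# and, if it is off, enable hot-plug jumpstart
sudo /usr/libexec/PlistBuddy -c "Set :UEFI:APFS:JumpstartHotPlug true" /Volumes/EFI/EFI/OC/config.plist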


49 minutes ago, Pavo said:

I am not sure why you have the issue with booting into Recovery; I have UEFI/APFS/JumpstartHotPlug enabled and that was all I needed to boot into Recovery. [...]

 

Thanks! Since we have the same mobo, CPU, and GPU, I'll give your EFI a try later tonight and see if I can get into Recovery. (I too am using BS over Catalina; it already seems better.)

 

I was thinking along the same lines about those file removals: why remove them when we have no clear idea as to why Apple put them there?

 

Again, thanks for all your work. I really like using OpenCore Builder.


No problem, but to be honest I really don't see a need to boot into Recovery. With the way our systems are set up as VMs, there really is no need at all to boot into Recovery to perform any actions.



On the amd-osx Discord, a user (ᕲᑘ'ᓰSᒪᓰᘉᘜᖇ) shared a link to the latest kernel, which is useful for Proxmox:

 

https://github.com/fabianishere/pve-edge-kernel

 

Tested so far without problems; maybe it could be useful for solving the reset bug on AMD GPUs? (I do not know.)

 

Last login: Sat Jul 18 05:56:24 CEST 2020 on tty1
Linux pve 5.7.8-1-zen2 #1 SMP 5.7.8-1-zen2 (Fri, 10 Jul 2020 15:39:00 +0200) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@pve:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.7.8-1-zen2)
pve-manager: 6.2-9 (running version: 6.2-9/4d363c5b)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.0-11
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-9
pve-cluster: 6.1-8
pve-container: 3.1-10
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-10
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-9
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
root@pve:~# 
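
Roughly how to install it on the Proxmox host, in case anyone wants to try; the .deb filename below is only a placeholder, use the one you actually download from the repo's Releases page:

# install the downloaded kernel package and reboot into it
apt install ./pve-kernel-5.7.8-1-zen2_amd64.deb
reboot
# afterwards, uname -r should report the new kernel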

 


22 hours ago, Driftwood said:

@iGPU Does your Aquantia work in the VM like fabiosun's?

 

 

No, it was problematic, so I leave the Aquantia to Proxmox and pass through the Intel I211. My network is only 1Gb, so no loss I suppose.


On 7/17/2020 at 11:39 AM, Pavo said:

No problem, but to be honest I really don't see a need to boot into Recovery. With the way our systems are set up as VMs, there really is no need at all to boot into Recovery to perform any actions.

 

I've gotten your EFI to boot and load. Thanks!

 

The VM is set to 'host' (basically using your VM config) and the Kernel/Patch section only contains two entries (what I've previously referred to as "combination #1 and #3 (leaf7)"). Additionally, you're using Emulate entries that I'd earlier removed but am now using once more (see spoiler).

 

(screenshot: Kernel > Emulate and Patch entries)

 

I had to make a few changes to SSDT-PCI and SSDT-GFX since I'm running two Radeon VIIs (and my NVMe drives only appear within the SF0 device; yours appear to be populated elsewhere), but it's basically the same as what you'd uploaded.

 

However, I still cannot boot into Recovery (10.16). When I select the Recovery drive from the OC menu (shown below), the screen goes black with a message in the upper left: "OCB: LoadImage failed - Unsupported". Then the message disappears and it loops back to the OC menu selection.

 

This must either be an OpenCore problem or perhaps the BS Recovery partition is corrupted. Since I can boot into the adjacent Catalina Recovery (10.15.6), this probably supports the idea that the BS Recovery is corrupted. (The next beta update should also update the Recovery partition.)

 

(screenshot: OC boot menu)
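
One way to sanity-check that theory from the booted system (plain diskutil, nothing OpenCore-specific, just an idea) is to list the APFS container and make sure the Big Sur Recovery volume is actually present and has a sensible size:

# lists all APFS containers/volumes, including the Recovery volumes and their roles
diskutil apfs list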

Edited by iGPU


Post your config; in particular I would like to see your APFS config section.

Thank you.

I am asking because I had the same error before I modified a part of it.

17 minutes ago, iGPU said:

 

I've gotten your EFI to boot and load. [...] However, I still cannot boot into Recovery (10.16). When I select the Recovery drive from the OC menu, I get "OCB: LoadImage failed - Unsupported" and it loops back to the OC menu.

 


46 minutes ago, fabiosun said:

Post your config; in particular I would like to see your APFS config section.

Thank you.

I am asking because I had the same error before I modified a part of it.

 

 

Attached is a config.plist file for OC v0.6.0 (17 July), derived from Pavo's recent upload. (The PlatformInfo section was redacted.)

(screenshot: UEFI > APFS section of the config)

 

config-NoPlateformInfo.plist.zip

 

 

(I'm going offline for a few hours: converting both Radeon VIIs to water-cooling...)

 

Edited by iGPU
screenshot added as requested by fabiosun

19 hours ago, fabiosun said:

On the amd-osx Discord a user shared a link to the latest kernel, useful for Proxmox: https://github.com/fabianishere/pve-edge-kernel — tested so far without problems. [...]

 

I have been running both the Zen 2 and generic 5.7.8 versions of these for the past week. They won't solve the reset bug. As you know, the power on/off patch also doesn't work for macOS VFIO, and 5.7.8 is unlikely to change that (although I did not try to patch 5.7.8). The problem with the reset issue is that AMD is unwilling to share details on how to bypass it on current-generation GPUs, but fingers crossed they will address that with the next one.

 

The pve-edge-kernel should be buildable against 5.8 too, if you change the submodule repo. If there's any chance of solving the reset bug, 5.8 may have a slim one, but I'm doubtful. I built 5.7.8.1 myself, so if anyone's interested I could try to build against the 5.8-rc5 SHA.
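
A rough outline of what that build would look like; the submodule layout is from memory, so double-check against the repo's README before relying on it:

# grab the packaging repo and its kernel sources
git clone https://github.com/fabianishere/pve-edge-kernel.git
cd pve-edge-kernel
git submodule update --init --recursive
# point the kernel submodule at the 5.8-rc5 tag/sha you want,
# then follow the README's debuild-based steps to produce the .deb packages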

 

Re: my earlier note on trying Big Sur on 3990x. For some reason I couldn't download it from Apple with my developer account. Does anyone know how to enroll in the beta?

 

Since others are running the 3970X (which is a perfectly fine CPU for this setup), here are my stock 3990X Geekbench numbers under the 5.4.44-1/2 mainline kernels (I can't overclock as I run air cooling). Curiously, 5.7.8 would consistently score about 1000 points lower in multicore, but the same or marginally better single-core. Not sure what to make of it, but the bottom line is: don't expect 5.7.8 to improve CPU performance for your VM. Unless you have an issue with your current hardware, I wouldn't recommend it.

 

https://browser.geekbench.com/v5/cpu/2974875


On 7/17/2020 at 2:00 AM, iGPU said:

I got VM 'host' to boot in Big Sur (even with Radeon VII), but...

 

What's your latest AGESA BIOS firmware on the MSI board?

 

@meina222 Nice benchmark, though not massively greater than the 3970X. Can you fill in your profile with your board and other specs? I know you are on a 3990X.

 

@iGPU Really interested in how water-cooling the Radeon VIIs goes for you. What coolers did you use? These: https://www.ekwb.com/shop/catalogsearch/result/?q=radeon+vii ?

Edited by Driftwood

6 hours ago, Driftwood said:

@iGPU Really interested in how water-cooling the Radeon VIIs goes for you. What coolers did you use? These: https://www.ekwb.com/shop/catalogsearch/result/?q=radeon+vii ?

 

Yes, I used EKWB. They're no longer being made, but I found two (one two months ago and the second last week; unfortunately the second one has no LED connections).

 

[I actually have another cooler plate for a Radeon VII made by BYKSKI (who have also stopped making them). I wanted both GPUs to have the same plates, so I held off on the conversion until I found a second EKWB. I will re-sell the BYKSKI plate on eBay at a later date; it is NIB.]

 

I'm attaching photos of the setup: first, assembly of the GPU plates, then the water-loop connection with external testing, and finally the internal placement. Cooling two GPUs is a little trickier, as a coupling is needed between the two GPUs, and a third hand would have been helpful during the install.

 

The initial photos show the take-down of the GPU from the back side: back plate removal, removal of internal screws, removal of mounting bracket screws, and disconnecting cables.

(photo: back plate removal)

 

(photo: removing the screws on the back of the GPU)

 

Six screws on the back of the bracket need removing (and there are two more internal screws; keep those, see the bottom of the post for re-use):

(photo: removing the original mounting bracket)

 

2 connectors need to be disconnected:

(photo: connector detail)

 

Next, the GPU chips need to be cleaned and the heat pads positioned for connection to the new copper front plate.

(photo: cleaning the GPU chips)

 

Chips cleaned and ready for grease (the heat pads were in good condition, so they were re-used):

(photo: cleaned GPU chips with heat pads)

 

Front plate ready for assembly to the GPU (grease not yet applied); the copper plate will be flipped 180° to fit:

(photo: front plate install)

 

Finally comes the addition of a back plate. This is optional but provides better heat transfer:

The actual back plate is not shown here, but the back plates can be seen in the image below showing the filling/leak-checking stage:

(photo: back plate heat pads)

 

Appearance of front (top plate) after assembly. (Upper left shows Noctua grease that was used.) The screws shown below were pre-assembled.

(photo: finished top plate)

 

After assembling each GPU's water-cooling plate, the two were connected with a sliding coupler (BYKSKI X41; 41mm allows connection for GPUs in slots 1 & 3). I use slip-on quick connectors for ease of assembly. The tubing is 8mm ID and 10mm OD (purchased from the UK). Below, the entire loop is filled and then run to test for leaks. Nothing leaked from the start! The radiator is a thick 280mm (Alphacool) that uses two 140mm Noctua fans. This stage ran for a couple of hours. (A cheater plug is connected to the main PSU connector; this prevents the mobo from powering up.)

Note the rear panel on each GPU cooler: they are on the bottom, farthest from the radiator. The pump is by BYKSKI.

The yellow-labelled knob-like structure on the top right of the radiator is an air-vent plug. Push the center and it vents out air.

(photo: external filling and leak testing)

 

Only after the above was completed was the cooling loop placed inside the chassis. However, it was leak-tested again, and after another hour or so of testing the mobo was finally powered up, as shown below. The LED cable for the top GPU has not yet been connected (an extension is needed); it will light up. As mentioned, the other GPU cooler had no cables, so I don't think it will ever light up.

 

The CPU cooler has its radiator (360mm) on the top, the GPUs' on the side. The three front 140mm fans are for intake, and on the rear is a 140mm exhaust fan. All radiators have their fans pushing air out of the case, so as not to internalize any hot air. (I purposefully chose a case in which I could maximize the use of 140mm fans for their greater airflow while using reduced speeds with less noise.)

 

(photo: GPUs running inside the case)

 

Edited by iGPU
