bitter n00b
Joined: 26 Dec 2003 Posts: 24
Posted: Sun May 29, 2022 6:47 am Post subject: any conventional wisdom on doing an install in a VM before m |
Title kinda says it all, mostly. I also posted this on the r/gentoo subreddit, but not everyone goes there. Plus I'm excited about remembering this old account and it's fun to use it again!
I've done the install a hundred times. I'm excited to have gentoo for my main thang again, because everything else just leaves me a little unsatisfied, somehow. I'm sure many of you know what I mean.
But I'd like to keep my system running while I get the gentoo system just right. Nothing complicated, it's a thinkpad x1 carbon 6th gen. Everything works fine, I just miss Portage and the community.
Should I pass through anything? Carve out and use a raw partition (LVM on LUKS currently), or copy everything over once the install is done and I'm ready to move in? Can I optimize CFLAGS for my processor (Kaby Lake Refresh), or would I need to rebuild @world later on to get them that specific?
Thanks!
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54237 Location: 56N 3W
Posted: Sun May 29, 2022 12:10 pm Post subject:
bitter,
Welcome back. We knew you would come back, just not when :)
If your intention is to end up with a bare metal Gentoo, start from the beginning on the bare metal by going for dual boot.
When you install into a VM, the VM exposes 'fake' hardware to your install, and you install for that hardware, not the real hardware.
That means the kernel and user-space hardware drivers will change between the VM and bare metal, so it's not just a matter of copying it over.
However, it's possible to build a kernel that works in both places.
You could also build binary packages in the VM to save compile time later.
I usually do it the other way round. When I'm bringing up a new box that will be a KVM host, I copy the bare metal install into a KVM guest, then fix the kernel.
That KVM guest becomes the one I clone for all the others.
_________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
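Neddy's binary-package suggestion can be sketched roughly like this (a hedged example, not the only way: the PKGDIR path and copying it over by hand are assumptions, and a proper binhost served over HTTP works too):

```shell
# In the VM, tell Portage to keep a binary package of everything it builds.
# Fragment for /etc/portage/make.conf (paths are assumptions):
FEATURES="buildpkg"
PKGDIR="/var/cache/binpkgs"

# Later, on the bare metal install, copy (or mount) that same PKGDIR
# into place and prefer the prebuilt binaries over compiling:
emerge --usepkg @world
```

Packages built against the wrong CHOST or USE flags will simply be rebuilt from source, so the worst case is losing the time saving, not breaking the install.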
alamahant Advocate
Joined: 23 Mar 2019 Posts: 3879
Posted: Sun May 29, 2022 12:51 pm Post subject:
Alternatively, just do a chroot install from your main system while it's running, provided you have space on your disks to create the partitions.
I feel a VM is completely unnecessary.
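For anyone who hasn't done it from a running system: the chroot route is essentially the Handbook procedure, with your live distro standing in for the install medium. A rough sketch, where the partition name is a placeholder:

```shell
# Sketch only -- /dev/nvme0n1p5 stands in for whatever new root you carve out.
mount /dev/nvme0n1p5 /mnt/gentoo
# ...unpack a stage3 tarball into /mnt/gentoo, then:
cp --dereference /etc/resolv.conf /mnt/gentoo/etc/
mount --types proc /proc /mnt/gentoo/proc
mount --rbind /sys /mnt/gentoo/sys
mount --rbind /dev /mnt/gentoo/dev
mount --bind /run /mnt/gentoo/run
chroot /mnt/gentoo /bin/bash
source /etc/profile
# ...and continue with the Handbook from the Portage configuration steps.
```

The host keeps running the whole time; only the final reboot into the new system involves any downtime.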
logrusx Veteran
Joined: 22 Feb 2018 Posts: 1532
Posted: Sun May 29, 2022 3:13 pm Post subject:
NeddySeagoon wrote: |
When you install into a VM, the VM exposes 'fake' hardware to your install and you install for that hardware, not the real hardware. |
I wonder if I'm missing something, because I think you need not mess with hardware until just before the first boot. If the hardware is well known, no kernel that can boot in both places is necessary. The kernel can be prepared to boot on bare metal, and if that does not work right away, one can always come back to the host OS or boot from an install CD and pick up from there. It'll save a lot of downtime, because most of the compiling will be done on the host OS rather than in a VM, although I don't get why a VM is necessary at all. Also, a dist-kernel can be used, which will almost certainly boot.
alamahant wrote: | Also just use a chroot install from your main system while running.
Provided you have space on your disks to create the partitions.
I feel a VM is completely unnecessary. |
bitter, if we're not missing something, like you're currently running Windows or another OS that cannot be used as a host, I too think that's the way to go.
Regards,
Georgi
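The dist-kernel logrusx mentions is a real option in the tree; for the record, the package names as they exist in Gentoo (the rest of the workflow is up to you):

```shell
# Distribution kernel built from source with Gentoo's default config:
emerge --ask sys-kernel/gentoo-kernel
# Or the prebuilt variant, which skips the compile entirely:
emerge --ask sys-kernel/gentoo-kernel-bin
```

Both aim for a generic config that boots on most hardware, which is exactly what you want when the kernel will first run somewhere other than where it was built.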
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54237 Location: 56N 3W
Posted: Sun May 29, 2022 4:28 pm Post subject:
logrusx,
This is an arm64 server because it's handy, but it's the same on other arches.
Real hardware.
Code: | $ lspci
0000:00:00.0 PCI bridge: Ampere Computing, LLC eMAG PCI Express Root Port 0 (rev 04)
0002:00:00.0 PCI bridge: Ampere Computing, LLC eMAG PCI Express Root Port 2 (rev 04)
0004:00:00.0 PCI bridge: Ampere Computing, LLC eMAG PCI Express Root Port 4 (rev 04)
0006:00:00.0 PCI bridge: Ampere Computing, LLC eMAG PCI Express Root Port 6 (rev 04)
0006:01:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
0007:00:00.0 PCI bridge: Ampere Computing, LLC eMAG PCI Express Root Port 7 (rev 04)
0007:01:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 04)
0007:02:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41) |
KVM fake hardware.
Code: | $ lspci
00:00.0 Host bridge: Red Hat, Inc. QEMU PCIe Host bridge
00:01.0 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.1 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.2 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.3 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.4 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.5 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.6 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.7 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:02.0 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:02.1 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
01:00.0 Ethernet controller: Red Hat, Inc. Virtio network device (rev 01)
03:00.0 Communication controller: Red Hat, Inc. Virtio console (rev 01)
04:00.0 SCSI storage controller: Red Hat, Inc. Virtio block device (rev 01)
05:00.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon (rev 01)
06:00.0 Unclassified device [00ff]: Red Hat, Inc. Virtio RNG (rev 01)
08:00.0 SCSI storage controller: Red Hat, Inc. Virtio block device (rev 01) |
In other virtualisation solutions, you can choose the fake hardware from a small list.
logrusx Veteran
Joined: 22 Feb 2018 Posts: 1532
Posted: Sun May 29, 2022 4:35 pm Post subject:
Neddy, you still need not mess with the hardware while emerging the system. It can even be cross-compiled. Just set up the right CFLAGS, emerge the necessary packages, maybe tune some of the USE flags, do all that, and then deal with the kernel and hardware. I don't see why you care what hardware is in the VM. I've never cared what hardware was on the bare metal when I installed my Gentoos, right up until the moment I needed to deal with the kernel and booting, which was generally one of the last steps.
Regards,
Georgi
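On the OP's CFLAGS question specifically: GCC has no dedicated -march target for Kaby Lake Refresh; it is covered by the skylake target, so the flags can be pinned explicitly even when compiling inside a VM whose virtual CPU looks different. A make.conf fragment as a sketch, not a complete file:

```shell
# /etc/portage/make.conf fragment (assumes GCC).
# -march=native inside a VM may describe the virtual CPU rather
# than the real one, so name the real target instead; Kaby Lake
# Refresh falls under GCC's skylake target.
COMMON_FLAGS="-march=skylake -O2 -pipe"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
```

Set this way from the start, the packages built in the VM are already tuned for the laptop, so no @world rebuild is needed just for the CFLAGS.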
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54237 Location: 56N 3W
Posted: Sun May 29, 2022 5:58 pm Post subject:
logrusx,
You only care when you want to boot the system, be it the bare metal or the VM.
As you say, packages can be built and shared.
figueroa Advocate
Joined: 14 Aug 2005 Posts: 2964 Location: Edge of marsh USA
Posted: Mon May 30, 2022 4:05 am Post subject:
As others have indicated, assuming you are running a reasonably up-to-date and "normal" Linux of some kind, just open a terminal and start. No need to go through the complexities of a virtual-machine middleman.
_________________ Andy Figueroa
hp pavilion hpe h8-1260t/2AB5; spinning rust x3
i7-2600 @ 3.40GHz; 16 gb; Radeon HD 7570
amd64/23.0/split-usr/desktop (stable), OpenRC, -systemd -pulseaudio -uefi