Discussion:
What's the best and easiest way to copy/move my old slow 320 GB SATA HDD's updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe it clean)?
Ant
2022-05-19 14:57:37 UTC
Hello.

What's the best and easiest way to copy/move my old slow 320 GB SATA HDD's
updated Debian bullseye v11.3 to an old fast 115 GB SSD (which I'm going to
wipe clean)? Yes, the SSD is smaller, but my Debian installation only uses
about 8 GB. I installed Debian using the whole 320 GB drive. I'll still be
using the same 13-year-old PC.

Thank you for reading and hopefully answering soon. :)
--
Quiet cooler week so far, but will today be slammy? Celtics have better get burned by Miami Heat!
Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
/\___/\ Ant(Dude) @ http://aqfl.net & http://antfarm.home.dhs.org.
/ /\ /\ \ Please nuke ANT if replying by e-mail.
| |o o| |
\ _ /
( )
The Natural Philosopher
2022-05-19 15:16:18 UTC
Post by Ant
Hello.
What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
it clean)? Yes, SSD is smaller but my Debian's installation only uses
about 8 GB. I installed Debian use the whole 320 GB drive. I'll still be
using the same 13 yrs. old PC.
Thank you for reading and hopefully answering soon. :)
My approach these days is to remove the old hard drive and install the
SSD, reinstall the latest Linux, then install the latest versions of the
apps, and roll any data across by reattaching the old hard drive and
carefully copying what you want...
--
“it should be clear by now to everyone that activist environmentalism
(or environmental activism) is becoming a general ideology about humans,
about their freedom, about the relationship between the individual and
the state, and about the manipulation of people under the guise of a
'noble' idea. It is not an honest pursuit of 'sustainable development,'
a matter of elementary environmental protection, or a search for
rational mechanisms designed to achieve a healthy environment. Yet
things do occur that make you shake your head and remind yourself that
you live neither in Joseph Stalin’s Communist era, nor in the Orwellian
utopia of 1984.”

Vaclav Klaus
Marco Moock
2022-05-19 15:54:45 UTC
Post by Ant
What's the best and easy way to copy/move my old slow 320 GB SATA
HDD's updated Debian bullseye v11.3 to an old fast 115 GB SSD (going
to wipe it clean)? Yes, SSD is smaller but my Debian's installation
only uses about 8 GB. I installed Debian use the whole 320 GB drive.
I'll still be using the same 13 yrs. old PC.
This PC likely doesn't have UEFI, which makes this easier.
You need to reduce the size of your current partition so it fits on the
new disk. This also means shrinking the file system. You can use GParted
for that in a live system booted from USB.
Then you can create a new msdos partition table on your SSD and clone
the partition (not the entire disk, so /dev/sdXN instead of just
/dev/sdX) with dd. Do some research about the alignment of that
partition, because if it is not correct the speed will suffer.
You should also specify a block size in dd.
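A minimal sketch of that clone step, assuming the shrunk source partition
turns out to be /dev/sda1 and the freshly created target partition on the
SSD is /dev/sdb1 (hypothetical names; confirm with lsblk before running
anything):

lsblk -o NAME,SIZE,MODEL,MOUNTPOINT       # confirm which disk is which
dd if=/dev/sda1 of=/dev/sdb1 bs=1M status=progress conv=fsync

bs=1M avoids copying in tiny 512-byte chunks, and conv=fsync makes dd
flush everything to the SSD before it reports success.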
Ant
2022-05-19 18:08:46 UTC
Post by Marco Moock
Post by Ant
What's the best and easy way to copy/move my old slow 320 GB SATA
HDD's updated Debian bullseye v11.3 to an old fast 115 GB SSD (going
to wipe it clean)? Yes, SSD is smaller but my Debian's installation
only uses about 8 GB. I installed Debian use the whole 320 GB drive.
I'll still be using the same 13 yrs. old PC.
This PC likely doesn't have UEFI, that makes it easier.
You need to reduce the size of you current partition so it fits on the
new installation. This also affects the size of the file system. You
can use GParted for that in a live system bootet from USB.
Then you can create a new msdos partition table on your SSD and then
clone the partition (not entire disk, so /dev/sdXN instead of just
/dev/sdX) with dd. You should do some research about the alignment of
that partition because if that is not correct the speed will be worse.
You should also specify the block size in dd.
That sounds complex. :/
--
Quiet cooler week so far, but will today be slammy? Celtics have better get burned by Miami Heat!
Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
/\___/\ Ant(Dude) @ http://aqfl.net & http://antfarm.home.dhs.org.
/ /\ /\ \ Please nuke ANT if replying by e-mail.
| |o o| |
\ _ /
( )
Parodper
2022-05-19 16:06:10 UTC
Post by Ant
Hello.
What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
it clean)? Yes, SSD is smaller but my Debian's installation only uses
about 8 GB. I installed Debian use the whole 320 GB drive. I'll still be
using the same 13 yrs. old PC.
Thank you for reading and hopefully answering soon. :)
You can just copy everything with cp from a LiveCD and install a
bootloader on the new disk.
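A rough sketch of that approach from a live session, assuming the old root
is /dev/sda1 and the new SSD partition is /dev/sdb1 (hypothetical device
names; check with lsblk first):

mkdir -p /mnt/old /mnt/new
mount /dev/sda1 /mnt/old
mount /dev/sdb1 /mnt/new
cp -ax /mnt/old/. /mnt/new/     # -a preserves permissions/owners/links, -x stays on one filesystem
grub-install --boot-directory=/mnt/new/boot /dev/sdb   # put GRUB on the SSD's MBR

You would still need to chroot into /mnt/new (or boot it once) to run
update-grub, and adjust its /etc/fstab if it refers to the old partition's
UUID.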
Tauno Voipio
2022-05-19 16:29:56 UTC
Post by Ant
Hello.
What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
it clean)? Yes, SSD is smaller but my Debian's installation only uses
about 8 GB. I installed Debian use the whole 320 GB drive. I'll still be
using the same 13 yrs. old PC.
Thank you for reading and hopefully answering soon. :)
First, you need to shrink the current installation to something
smaller than the new SSD.

Download GParted Live from the GParted pages and install it
to a CD/DVD/USB stick (whichever you can boot from). It is quite
straightforward to shrink the only partition to, say, 10 GiB.
Check the last block number of the shrunk partition to know how much
you need to copy in the next step.

If you can install the new SSD in the hardware together with the
old drive, just boot from the shrunk old drive and use e.g. dd
to copy enough of the old disk to cover the full image (a sketch
follows below).

If everything has gone well, shut down the computer, swap the
disks so only the new disk is connected, and boot it. If the boot
succeeds, the next step is to expand the new partition and file
system to fill the SSD, using the bootable GParted again.
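A sketch of that partial whole-disk copy, assuming the shrunk partition
ends within the first 20 GiB, the old drive is /dev/sda and the SSD is
/dev/sdb (hypothetical names and size; verify with lsblk):

dd if=/dev/sda of=/dev/sdb bs=1M count=20480 status=progress conv=fsync
# 20480 x 1 MiB = 20 GiB, enough to cover the MBR, the partition table
# and the shrunk partition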
--
-TV
Ant
2022-05-19 19:17:10 UTC
FYI. My current HDD's df and /etc/fstab can be found in
https://pastebin.com/raw/zAJM6Npc.
Post by Ant
Hello.
What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
it clean)? Yes, SSD is smaller but my Debian's installation only uses
about 8 GB. I installed Debian use the whole 320 GB drive. I'll still be
using the same 13 yrs. old PC.
Thank you for reading and hopefully answering soon. :)
--
Quiet cooler week so far, but will today be slammy? Celtics have better get burned by Miami Heat!
Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
/\___/\ Ant(Dude) @ http://aqfl.net & http://antfarm.home.dhs.org.
/ /\ /\ \ Please nuke ANT if replying by e-mail.
| |o o| |
\ _ /
( )
Bit Twister
2022-05-19 19:52:06 UTC
Post by Ant
FYI. My current HDD's df and /etc/fstab can be found in
https://pastebin.com/raw/zAJM6Npc.
Post by Ant
Hello.
What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
it clean)? Yes, SSD is smaller but my Debian's installation only uses
about 8 GB. I installed Debian use the whole 320 GB drive. I'll still be
using the same 13 yrs. old PC.
If it were me, I would boot a rescue CD,
https://www.system-rescue.org/
(http://www.sysresccd.org/Download has instructions on copying it to USB),
use gparted to format and label the new partition. I would then create
/src and /dest mount points, mount the respective partitions on them, and
use rsync to copy /src to the /dest partition. I would then use
mousepad to change the / mount point's UUID to /dest's UUID in the copied fstab.

The operation would be something like
mkdir /src
mkdir /dest
# use gparted to format the SSD, and note the new partition's UUID and /dev/xxxxx
mount -t auto /dev/sdb1 /src
mount -t auto /dev/xxxxx /dest
rsync --delete -aAHSXxv /src/ /dest
mousepad /dest/etc/fstab # set the / entry's UUID to /dest's UUID (see the example below)

umount /src /dest
reboot



The old install should boot up.
Running update-grub there should rebuild the grub menu to include the new
partition's copy.
Boot that entry's kernel,
then run update-grub
and grub-install /dev/xxxx from within the new install.
A reboot should then let you pick your new install from the new install's
grub menu.

Hope I got everything correct.
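For the fstab edit, a hedged example of finding the new partition's UUID
and what the resulting line might look like (the device name and UUID here
are made up):

blkid /dev/sdc1
# /dev/sdc1: UUID="1234abcd-0000-4000-8000-abcdef123456" TYPE="ext4"

Then in /dest/etc/fstab the root entry would become something like:
UUID=1234abcd-0000-4000-8000-abcdef123456  /  ext4  errors=remount-ro  0  1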
Ant
2022-05-23 01:08:35 UTC
OK. I think I finally got it working after reading everyone's suggestions (thanks!).

What I did, from memory, over my weekend after much trial and error:
1. Downloaded and burned https://downloads.sourceforge.net/gparted/gparted-live-1.4.0-1-amd64.iso and https://osdn.net/projects/clonezilla/downloads/76513/clonezilla-live-2.8.1-12-amd64.iso/ to two different CD-RWs.
2. Made a backup of my original HDD's data! Duh.
3. Booted GParted from the burned CD-RW. Resized my Seagate 320 GB HDD's Debian partition to about 106 GB. Went to the 115 GB SSD, deleted all its partitions, and made almost the whole drive one EXT4 filesystem. At the right end, made a new 1 GB extended partition holding a 1 GB swap partition.
4. Rebooted to my HDD to see if its Debian still works. It did. Thank God!
5. Rebooted to Clonezilla's burned CD-RW and copied the Seagate 320 GB HDD's Debian partition to the SSD, which took under four minutes since it was a small installation.
6. Rebooted to the SSD, but it still went to my HDD! I found out it was because Grub was confused by the duplicate UUIDs.
7. Physically disconnected the HDD's SATA cable and retried. It worked. I was hoping to keep both connected just in case. :(
Post by Ant
FYI. My current HDD's df and /etc/fstab can be found in
https://pastebin.com/raw/zAJM6Npc.
Post by Ant
Hello.
What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
it clean)? Yes, SSD is smaller but my Debian's installation only uses
about 8 GB. I installed Debian use the whole 320 GB drive. I'll still be
using the same 13 yrs. old PC.
Thank you for reading and hopefully answering soon. :)
--
Dang computer problems! Quiet cooler week with the recent very light rain. It's like winter again! Celtics have better get burned by Miami Heat at the end of the eastern conference!
Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
/\___/\ Ant(Dude) @ http://aqfl.net & http://antfarm.home.dhs.org.
/ /\ /\ \ Please nuke ANT if replying by e-mail.
| |o o| |
\ _ /
( )
Bit Twister
2022-05-23 02:01:56 UTC
Post by Ant
6. Rebooted to SSD, but it still went to my HDD! So, I found out it was because of the confusing UUIDs from Grub.
7. Physically disconnected HDD's SATA cable and retried. It worked. I was hoping to keep both connected just in case. :(
You can have both. They just need different UUIDs, an updated /etc/fstab, and grub updated/reinstalled.
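For example, one way to give the cloned (or the old) ext4 partition a fresh
UUID, assuming the partition is /dev/sda1 (hypothetical name) and is not
mounted:

umount /dev/sda1              # the filesystem must not be mounted
tune2fs -U random /dev/sda1   # assign a new random UUID to the ext4 filesystem
blkid /dev/sda1               # confirm the new UUID, then update /etc/fstab and run update-grub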
25.BX945
2022-05-23 03:53:43 UTC
Post by Bit Twister
Post by Ant
6. Rebooted to SSD, but it still went to my HDD! So, I found out it was because of the confusing UUIDs from Grub.
7. Physically disconnected HDD's SATA cable and retried. It worked. I was hoping to keep both connected just in case. :(
You can have both. They just have to have different UUIDs, updated /etc/fstab and gurub update/installed.
Correct. You need to tweak 'fstab' AND the old drive. You can't
have two drives with identical UUIDs in there. The
alternative - one I like - is to drop the UUID crap entirely
and create NAMED drives in fstab. It's easier to tell what's
what afterwards.

As for the actual xfer ... in theory 'dd' oughtta do it.
Attach your SSD, then "dd if=/dev/sda of=/dev/sdb bs=64k"
is kind of the basic. DO use 'lsblk' to MAKE SURE what
/dev/sd(?) the original and new drives are ! 'dd' is
sometimes nicknamed 'disk destroyer' for a REASON, YOU
have to get it right !

You can add "status=progress" to see what's going on with 'dd'.
One important note ... just because 'dd' says it's done does
NOT mean it's done ... you'll likely still see the drive light
blinking for a few minutes after. Apparently lots of data gets
stored in memory buffers and it takes a little while for all
those to be emptied onto your target drive. Get impatient and
you'll get an incomplete copy. No blinky light ? ASSUME an xtra
five minutes after 'dd' claims it's done.
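Rather than guessing, a hedged way to be sure those buffers are flushed:

sync    # blocks until all pending writes have actually reached the disks
# or tell dd itself to flush before it exits:
dd if=/dev/sda of=/dev/sdb bs=64k status=progress conv=fsync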

THEN disconnect the HDD and reboot using the SSD and see if
it all works. If so, best if you use gparted from a linux
stick to totally clear the old HDD - including changing
its UUID, then reboot with it plugged in as normal. It'll
be detected as a new drive, probably /dev/sdb, and you can
go from there.

(Dual-booters .. you MIGHT run into problems because Winders
is The Great Preventer and might make extra effort to be sure
you can't get there from here. But, why would anyone want a
box with Winders on it ... ???)

In short, there's NO reason to lose your existing - perhaps
highly-customized - distro just to move to an SSD. I do
development stuff and have umpteen zillion apps and libraries
and custom settings. Losing those is a DISASTER - 24 hours+
to start from scratch assuming I can remember ALL the special
settings I've done.

Are SSDs better for everything ? MAYbe not. On the whole, do
not expect them to tolerate as many read/writes as a magnetic
drive. This might be important if you're running a big database
or anything else that does lots of re-indexing all the time.
Also, for security/disposal reasons, you can't blank 'em out
reliably with bleachbit or even 'dd' because of the wear-leveling
system built in. "Dispose" with a large hammer ... maybe one of
those big sparky stun-guns .........

If you're more an "average user" then SSDs oughtta be fine.
There are some deep-deep-down kernel-level tweaks you can also
make to further improve SSD performance. There are assumptions
made, that you have a magnetic drive, and some of that can be
adjusted to your advantage (gamers, do your research).
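Two common examples of such tweaks on a Debian-ish system (hedged
suggestions, not something specified earlier in this thread):

# mount the root filesystem with noatime in /etc/fstab, e.g.
# UUID=...  /  ext4  defaults,noatime,errors=remount-ro  0  1
# and enable the periodic TRIM timer:
systemctl enable --now fstrim.timer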
Ant
2022-05-23 04:11:50 UTC
Post by 25.BX945
Post by Bit Twister
Post by Ant
6. Rebooted to SSD, but it still went to my HDD! So, I found out it was because of the confusing UUIDs from Grub.
7. Physically disconnected HDD's SATA cable and retried. It worked. I was hoping to keep both connected just in case. :(
You can have both. They just have to have different UUIDs, updated /etc/fstab and gurub update/installed.
Correct. You need to tweak 'fstab' AND the old drive. You can't
have two identically UUID identified drives in there. The
alternative - one I like - is to drop the UUID crap entirely
and create NAMED drives in fstab. It's easier to tell what's
what afterwards...
Why did they even use UUIDs? It's so confusing.
--
Dang computer problems! Quiet cooler week with the recent very light rain. It's like winter again! Celtics have better get burned by Miami Heat at the end of the eastern conference!
Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
/\___/\ Ant(Dude) @ http://aqfl.net & http://antfarm.home.dhs.org.
/ /\ /\ \ Please nuke ANT if replying by e-mail.
| |o o| |
\ _ /
( )
Bit Twister
2022-05-23 04:20:17 UTC
Post by Ant
Why did they even use UUIDs? It's so confusing.
Because, once in a while, multi-drive systems would not come up with the
same /dev/sdxx values.

You would have avoided all this "experience" had you used rsync instead
of dd.
Anssi Saari
2022-05-23 07:34:38 UTC
Post by Ant
Why did they even use UUIDs? It's so confusing.
For the case where you unplug the old drive after cloning, it's easier.
You don't need to edit /etc/fstab or anything else either. Grub will know
what the root partition is and where to resume from if hibernation is
used; likewise, the kernel will know what the root file system is.

Why do you want both drives in the system anyway? After cloning I do
like to keep the old drive *around* for a while, but not plugged into
anything. It serves as a cloneable backup if needed. After a while at
least, my recently cloned HD goes into SER recycling, since it's 2007
vintage.
25.BX945
2022-05-25 02:14:16 UTC
Post by Ant
Post by 25.BX945
Post by Bit Twister
Post by Ant
6. Rebooted to SSD, but it still went to my HDD! So, I found out it was because of the confusing UUIDs from Grub.
7. Physically disconnected HDD's SATA cable and retried. It worked. I was hoping to keep both connected just in case. :(
You can have both. They just have to have different UUIDs, updated /etc/fstab and gurub update/installed.
Correct. You need to tweak 'fstab' AND the old drive. You can't
have two identically UUID identified drives in there. The
alternative - one I like - is to drop the UUID crap entirely
and create NAMED drives in fstab. It's easier to tell what's
what afterwards...
Why did they even use UUIDs? It's so confusing.
They thought it would be more "generic" - uniquely identifying
a disk. Alas such a scheme TELLS you NOTHING USEFUL. I like
names that DO tell you something, helps keep track, esp if
you have a box with lots of drives/partitions. I keep one
with EIGHT drives and 12 partitions ... need all the cues
I can get with that one. I don't WANT the UUID idea of
"uniquely identified", assigning human-readable names lets
me just slide in a replacement disk without fartin' around
very much. Fstab just sees "BakDrive3" and doesn't care if
it's the same physical disk as before.
David W. Hodgins
2022-05-25 02:53:20 UTC
Post by 25.BX945
Post by Ant
Why did they even use UUIDs? It's so confusing.
The use of uuids was a solution to the problem that drive detection can't
be relied on to always happen in the same order. The first drive that's fully
powered up becomes sda, even if it's usually the second drive, sdb.
Post by 25.BX945
They thought it would be more "generic" - uniquely identifying
a disk. Alas such a scheme TELLS you NOTHING USEFUL. I like
names that DO tell you something, helps keep track, esp if
you have a box with lots of drives/partitions. I keep one
with EIGHT drives and 12 partitions ... need all the cues
I can get with that one. I don't WANT the UUID idea of
"uniquely identified", assigning human-readable names lets
me just slide in a replacement disk without fartin' around
very much. Fstab just sees "BakDrive3" and doesn't care if
it's the same physical disk as before.
You don't have to use the uuid. From my fstab ...
LABEL=x7b / ext4 defaults,noatime 1 1

I chose the label x7b as it's an x86_64 install of Mageia 7 (since upgraded to 8)
on /dev/sdb. Like the uuid, if you choose to use a label, it's up to you to ensure
it's unique. Use a label that means something to you, or let the system use the
generated uuid. Your choice.
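For reference, a label like that can be set on an existing ext4 filesystem
with something like (the device name is just an example):

e2label /dev/sdb1 x7b      # set the filesystem label
# or equivalently: tune2fs -L x7b /dev/sdb1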

Regards, Dave Hodgins
Bit Twister
2022-05-25 08:00:34 UTC
Post by David W. Hodgins
Post by 25.BX945
Post by Ant
Why did they even use UUIDs? It's so confusing.
The use of uuids were a solution to the problem where drive detection can't
be relied on to always be in the same order. The first drive that's fully
powered up becomes sda, even if it's usually the second drive, so sdb.
Post by 25.BX945
They thought it would be more "generic" - uniquely identifying
a disk. Alas such a scheme TELLS you NOTHING USEFUL. I like
names that DO tell you something, helps keep track, esp if
you have a box with lots of drives/partitions. I keep one
with EIGHT drives and 12 partitions ... need all the cues
I can get with that one. I don't WANT the UUID idea of
"uniquely identified", assigning human-readable names lets
me just slide in a replacement disk without fartin' around
very much. Fstab just sees "BakDrive3" and doesn't care if
it's the same physical disk as before.
You don't have to use the uuid. From my fstab ...
LABEL=x7b / ext4 defaults,noatime 1 1
I chose the label x7b as it's an x86_64 install of Mageia 7 (since upgraded to 8)
on /dev/sdb. Like the uuid, if you choose to use a label, it's up to you to ensure
it's unique. Use a label that means something to you, or let the system use the
generated uuid. Your choice.
Yep, I use labels, even for swap. Except for swap I use the partition label,
because each format of swap wipes out the media label/UUID, and that format
is usually performed when installing a new OS.

I usually set the Partition label and media label to the same value.
Those usually become my mount points.

$ grep swap /etc/fstab
PARTLABEL=swap swap swap defaults,nofail 0 0

$ lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT,LABEL,PARTLABEL
NAME TYPE FSTYPE MOUNTPOINT LABEL PARTLABEL
sda disk
├─sda1 part ext4 mga6 mga6
├─sda2 part ext4 / mga8 mga8
├─sda3 part ext4 mga7 mga7
├─sda4 part ext4 cauldron cauldron
├─sda5 part ext4 /local local local
├─sda6 part ext4 /accounts accounts accounts
├─sda7 part ext4 /misc misc misc
├─sda8 part ext4 /spare spare spare
├─sda9 part ext4 /vmguest vmguest vmguest
└─sda10 part bios_grub
sdb disk
├─sdb1 part swap [SWAP] swap swap
├─sdb2 part ext4 bk_up bk_up
├─sdb3 part ext4 hotbu hotbu
├─sdb4 part ext4 cauldron_bkup cauldron_bkup
├─sdb5 part ext4 /myth myth myth
├─sdb6 part ext4 net_ins net_ins
└─sdb7 part ext4 net_ins_bkup net_ins_bkup
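Setting a partition label (PARTLABEL) like the ones shown above can be done
on a GPT disk with, for example (hypothetical device and partition number):

sgdisk -c 1:swap /dev/sdb   # set GPT partition 1's name (PARTLABEL) to "swap"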
25.BX945
2022-05-26 03:55:04 UTC
Post by David W. Hodgins
Post by Ant
Why did they even use UUIDs? It's so confusing.
The use of uuids were a solution to the problem where drive detection can't
be relied on to always be in the same order. The first drive that's fully
powered up becomes sda, even if it's usually the second drive, so sdb.
   They thought it would be more "generic" - uniquely identifying
   a disk. Alas such a scheme TELLS you NOTHING USEFUL. I like
   names that DO tell you something, helps keep track, esp if
   you have a box with lots of drives/partitions. I keep one
   with EIGHT drives and 12 partitions ... need all the cues
   I can get with that one. I don't WANT the UUID idea of
   "uniquely identified", assigning human-readable names lets
   me just slide in a replacement disk without fartin' around
   very much. Fstab just sees "BakDrive3" and doesn't care if
   it's the same physical disk as before.
You don't have to use the uuid. From my fstab ...
LABEL=x7b / ext4 defaults,noatime 1 1
I chose the label x7b as it's an x86_64 install of Mageia 7 (since upgraded to 8)
on /dev/sdb. Like the uuid, if you choose to use a label, it's up to you to ensure
it's unique. Use a label that means something to you, or let the system use the
generated uuid. Your choice.
Regards, Dave Hodgins
Indeed. Correctly using LABEL generally solves the sda/sdb/sdc thing.
You DO need to actually label the partitions though. UUID or part
label, both "uniquely identify" - but the latter is far more human
readable.

I am aware of the "problem" mentioned. It used to be an issue with
OpenSuse about ten years ago - you might have to use the emergency
terminal to tweak fstab. Have not seen it with Debian-based distros
and certainly not lately. I use LABEL in boxes with 4-8 disks pretty
regularly. I think they smartened-up the kernel somehow ..

Bit Twister
2022-05-23 04:14:43 UTC
Post by 25.BX945
Post by Bit Twister
Post by Ant
6. Rebooted to SSD, but it still went to my HDD! So, I found out it was because of the confusing UUIDs from Grub.
7. Physically disconnected HDD's SATA cable and retried. It worked. I was hoping to keep both connected just in case. :(
You can have both. They just have to have different UUIDs, updated /etc/fstab and gurub update/installed.
Correct. You need to tweak 'fstab' AND the old drive. You can't
have two identically UUID identified drives in there. The
alternative - one I like - is to drop the UUID crap entirely
and create NAMED drives in fstab. It's easier to tell what's
what afterwards.
Very true, and you will also have the same problem if NAMED drives have the
same value. I too moved to using labels instead of UUIDs.
Post by 25.BX945
In short, there's NO reason to lose your existing - perhaps
highly-customized - distro just to move to an SSD. I do
development stuff and have umpteen zillion apps and libraries
and custom settings. Losing those is a DISASTER - 24 hours+
to start from scratch assuming I can remember ALL the special
settings I've done.
Hehe, I always do clean installs. As for custom settings, you either
keep a log of all changes with before/after settings for each file,
OR just write scripts to automate making your changes. It only costs me about
an hour for my scripts to make all my changes.
Tauno Voipio
2022-05-23 14:36:28 UTC
Post by Ant
OK. I think I finally got it working now after reading everyone's suggestions (thanks!).
1. Downloaded and burned https://downloads.sourceforge.net/gparted/gparted-live-1.4.0-1-amd64.iso and https://osdn.net/projects/clonezilla/downloads/76513/clonezilla-live-2.8.1-12-amd64.iso/ to two different CD-RW.
2. Made a back up of my original HDD's datas! Duh.
3. Booted gparted from the burned CD-RW. Resized my Seagate 320 GB HDD's Debian partition to about 106 GB. Went to 115 GB SSD, deleted all partitions, and made almost the whole drive as EXT4 FS. Made a new right extended 1 GB partition with a 1 GB swap partition.
4. Rebooted to my HDD to see if its Debian still works. It did. Thanks God!
5. Rebooted to Clonezilla's burned CD-RW and copied Seagate 320 GB HDD's Debian partition to SSD which took under four minutes since it was a small installation.
6. Rebooted to SSD, but it still went to my HDD! So, I found out it was because of the confusing UUIDs from Grub.
7. Physically disconnected HDD's SATA cable and retried. It worked. I was hoping to keep both connected just in case. :(
Post by Ant
FYI. My current HDD's df and /etc/fstab can be found in
https://pastebin.com/raw/zAJM6Npc.
Post by Ant
Hello.
What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
it clean)? Yes, SSD is smaller but my Debian's installation only uses
about 8 GB. I installed Debian use the whole 320 GB drive. I'll still be
using the same 13 yrs. old PC.
Thank you for reading and hopefully answering soon. :)
Your filesystem (EXT4) on the SSD may still be smaller than the
partition it is in. You can use the GParted CD to check and maybe
resize it.
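A sketch of doing that check/resize from a live CD's terminal instead,
assuming the SSD's root partition is /dev/sda1 (hypothetical name) and is
not mounted:

e2fsck -f /dev/sda1    # resize2fs wants a freshly checked filesystem
resize2fs /dev/sda1    # with no size given it grows the ext4 fs to fill the partition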
--
-TV
Bobbie Sellers
2022-05-19 19:32:34 UTC
Why post to so many newsgroups? Seems trollish to me.
Post by Ant
Hello.
What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
it clean)? Yes, SSD is smaller but my Debian's installation only uses
about 8 GB. I installed Debian use the whole 320 GB drive. I'll still be
using the same 13 yrs. old PC.
Thank you for reading and hopefully answering soon. :)
Do a fresh install and copy back the information you wish to retain.
The Natural Philosopher
2022-05-20 11:06:14 UTC
Post by Bobbie Sellers
Why post to so many newsgroups. Seems Trollish to me.
Post by Ant
Hello.
What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
it clean)? Yes, SSD is smaller but my Debian's installation only uses
about 8 GB. I installed Debian use the whole 320 GB drive. I'll still be
using the same 13 yrs. old PC.
Thank you for reading and hopefully answering soon. :)
Do a fresh install and copy back the information you wish to retain.
Yes!

Experience suggests that if this sort of thing is something you don't do
every day, this is faster than 'upgrading in place'.
--
Gun Control: The law that ensures that only criminals have guns.
James Moe
2022-05-20 04:04:03 UTC
Post by Ant
What's the best and easy way to copy/move my old slow 320 GB SATA HDD's
updated Debian bullseye v11.3 to an old fast 115 GB SSD (going to wipe
it clean)? Yes, SSD is smaller but my Debian's installation only uses
about 8 GB. I installed Debian use the whole 320 GB drive.
Here are my notes for transferring system disks.

Moving System Partitions or Volumes
Before booting, create a list of the /dev/sdXn devices of interest.
sdXn = sdb2, for instance.
Boot a Rescue System or a “Live CD.”
1. Verify the volumes are as expected by mounting and inspecting them.
cd /
mkdir /mnt/dev-old
mkdir /mnt/dev-new
mount /dev/sdXn /mnt/dev-old # the volume to replace or move
mount /dev/sdYn /mnt/dev-new # the target volume
2. Copy the data from old to new.
cd /mnt/dev-old
cp -a . /mnt/dev-new
3. Unmount the volumes.
umount /mnt/dev-old
umount /mnt/dev-new
Repeat 1., 2., and 3. for each volume.
4. Clean up.
rmdir /mnt/dev-old
rmdir /mnt/dev-new
5. Create the build environment.
cd /
mount /dev/sdYn /mnt # Mount the new root first

# Only if /usr or /boot are separate volumes (their mount points already
# exist in the copied root)
mount /dev/sdUn /mnt/usr
mount /dev/sdBn /mnt/boot

mount --bind /sys /mnt/sys
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
6. Modify /mnt/etc/fstab as required.
7. Rebuild the initrd (for a Debian equivalent of steps 7-8, see the sketch after these notes).
chroot /mnt
mkinitrd
8. If moving the root volume:
- Run yast::Boot Loader
- Modify "Boot Loader Location" as needed. Usually "Boot from Partition" is okay.
- Verify "Set Active Flag" and "Write generic boot code to MBR" are set.
- Save
9. All done. Restart with the new configuration.
exit
shutdown -r now
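On the OP's Debian bullseye system, a rough equivalent of steps 7-8 inside
the chroot would be (assuming the SSD is /dev/sdb and grub-pc is installed;
both assumptions, adjust to your setup):

chroot /mnt
update-initramfs -u      # rebuild the initrd for the installed kernel
grub-install /dev/sdb    # write GRUB to the SSD's MBR
update-grub              # regenerate /boot/grub/grub.cfg
# then continue with step 9 (exit the chroot and reboot)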
--
James Moe
jmm-list at sohnen-moe dot com
Think.