
4 Posts


March 9th, 2023 14:00

Old issue: A100_root_resize script failing

Please don't say "call support"; we don't have it on this cluster. I need to revive an A100 that has been sitting on the shelf from an offline cluster. It's on the right OneFS version and just needs a few patches pushed, which I have, but one of those patches needs the root partition to be 2 GB, and this unit was never resized. About a year ago we did this across several clusters, and I still have the A100_root_resize.sh script that grows root from 1 GB to 2 GB. When I run it, it of course ends in FAILURE: it complains that index 13 on da1 already exists, and so it fails.

Product: A100-1U-Dual-256GB-4x1GE-2x10GE SFP+-600GB
command: gpart add -t freebsd-ufs -s 2G -b 18130944 -l root0 -i 13 da1
gpart: index '13': File exists
status : 1
FAILURE
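
(For anyone checking along: a quick way to see what's sitting on that index. The awk filter is just my own one-liner, not part of the script.)

# show whatever currently occupies GPT index 13 on da1
gpart show -l da1 | awk '$3 == 13'
#  18065408  65536  13  keystore  (32M)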

When I dump the partition table via gpart, I get:

gpart show -l
=> 34 585937433 da1 GPT (279G)
34 128 1 boot (64K)
162 1 3 bootdiskid (512B)
163 1885 - free - (943K)
2048 104857 2 scratch (51M)
106905 1639 - free - (820K)
108544 2097152 4 root0 [isi_active,bootme] (1.0G)
2205696 2097152 5 root1 (1.0G)
4302848 2097152 6 var0 (1.0G)
6400000 2097152 7 var1 (1.0G)
8497152 1048576 8 journal-backup (512M)
9545728 65536 9 kernelsdump (32M)
9611264 65536 10 mfg (32M)
9676800 4194304 11 var-crash (2.0G)
13871104 4194304 12 kerneldump (2.0G)
18065408 65536 13 keystore (32M)
18130944 567806523 - free - (271G)

=> 34 585937433 da2 GPT (279G)
34 128 1 boot (64K)
162 1 3 bootdiskid (512B)
163 1885 - free - (943K)
2048 104857 2 scratch (51M)
106905 1639 - free - (820K)
108544 2097152 4 root0 [isi_active,bootme] (1.0G)
2205696 2097152 5 root1 (1.0G)
4302848 2097152 6 var0 (1.0G)
6400000 2097152 7 var1 (1.0G)
8497152 1048576 8 journal-backup (512M)
9545728 65536 9 mfg (32M)
9611264 65536 10 keystore (32M)
9676800 576260667 - free - (275G)

So the script is coded to use indices 13 and 14 to create the larger root0 and root1, mirror the data over, and then remove the old ones, but sadly index 13 already exists here. I don't see the script assuming anywhere else that root is 13 & 14, but I have not dug through the whole deep back end.
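
In case it helps anyone else who hits this, here is a rough sketch of how I'd find the first unused GPT index on a disk. This is my own helper, not something from the Dell script; it just parses gpart show, where real partition lines carry a numeric index in column 3 and the "- free -" lines don't:

#!/bin/sh
# find_free_idx.sh - print the first unused GPT partition index on a disk
disk=${1:-da1}
# skip the "=>" header line, keep only numeric index values, sort them
used=$(gpart show "$disk" | awk '$1 != "=>" && $3 ~ /^[0-9]+$/ {print $3}' | sort -n)
i=1
for u in $used; do
    [ "$i" -lt "$u" ] && break   # gap found below the next used index
    i=$((u + 1))
done
echo "first free index on $disk: $i"

On this unit's da1 (indices 1-13 in use) it prints 14, which lines up with the collision above.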

What I do know is that on all my other units da1 only has indices 1-12, and one of the partitions listed above sits on da2 instead of da1.

Can I tweak the script to use 14 and 15 and get away with it? Does anybody have the old KB article and can provide it? I have the script, but searching for the article now shows it as restricted with a note to call support, which was NOT an issue a year ago; it seems we didn't save that one, unlike the other 14 we needed last year.

Again, we don't have support, so that is NOT an option.

 

FYI, this is the gpart dump from any of my other (working) A100s:

=> 34 585937433 da1 GPT (279G)
34 128 1 boot (64K)
162 1 3 bootdiskid (512B)
163 1885 - free - (943K)
2048 104857 2 scratch (51M)
106905 4195943 - free - (2.0G)
4302848 2097152 6 var0 (1.0G)
6400000 2097152 7 var1 (1.0G)
8497152 1048576 8 journal-backup (512M)
9545728 65536 9 kernelsdump (32M)
9611264 65536 10 mfg (32M)
9676800 4194304 11 kerneldump (2.0G)
13871104 65536 12 keystore (32M)
13936640 4194304 - free - (2.0G)
18130944 4194304 13 root0 (2.0G)
22325248 4194304 14 root1 [isi_active,bootme] (2.0G)
26519552 559417915 - free - (267G)

=> 34 585937433 da2 GPT (279G)
34 128 1 boot (64K)
162 1 3 bootdiskid (512B)
163 1885 - free - (943K)
2048 104857 2 scratch (51M)
106905 4195943 - free - (2.0G)
4302848 2097152 6 var0 (1.0G)
6400000 2097152 7 var1 (1.0G)
8497152 1048576 8 journal-backup (512M)
9545728 65536 9 mfg (32M)
9611264 4194304 10 var-crash (2.0G)
13805568 65536 11 keystore (32M)
13871104 4259840 - free - (2.0G)
18130944 4194304 13 root0 (2.0G)
22325248 4194304 14 root1 [isi_active,bootme] (2.0G)
26519552 559417915 - free - (267G)

It seems that on this one var-crash was put on da1 instead of da2, and then keystore was created on both, so the index numbering is off. Looking for a fix, be it manual or scripted.
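
For anyone who wants the manual route, this is the sketch I was considering, assuming all the script really does is carve out two new 2G roots, copy the old ones across, and delete the 1G originals. The offset comes from my dump above, so verify it against yours first; and the [isi_active] flag is Isilon-specific, so I'd expect the script rather than plain gpart to handle that part:

# manual sketch only - double-check offsets against your own gpart dump
gpart add -t freebsd-ufs -s 2G -b 18130944 -l root0 -i 14 da1
gpart add -t freebsd-ufs -s 2G -l root1 -i 15 da1
# newfs the new partitions and copy the old 1G roots across, e.g.:
#   newfs /dev/da1p14
#   mount /dev/da1p14 /mnt
#   dump -0af - /dev/da1p4 | (cd /mnt && restore -rf -)
# once copied and verified, drop the old 1G roots and move the boot flag:
#   gpart delete -i 4 da1
#   gpart delete -i 5 da1
#   gpart set -a bootme -i 14 da1
# then the same dance again for da2 (its offsets differ)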

 

Thanks in advance.

 

3 Apprentice • 593 Posts

March 12th, 2023 14:00

@fskrot, have you tried just joining / adding the A100 to the cluster? It should automatically reimage with whatever version & patches are on the existing cluster.

4 Posts

March 13th, 2023 03:00

The A100 is on the same OneFS. I have removed and re-added it, but no luck. If you read and look carefully, the resize script fails because the partition layout is not as expected (it came this way years ago). These are the actual FreeBSD boot partitions, from what I can tell.

4 Posts

March 15th, 2023 02:00

So, with nobody else chiming in, I edited the script and told it to use indices 14 & 15 instead of 13 & 14, and of course the script worked; the unit was rebooted and works. I just don't know what, if anything, might fail in the future.
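
For the record, the edit amounted to bumping the two index numbers. If your copy passes the indices straight into the gpart calls the way mine appeared to, something like this does it; order matters, bump 14 to 15 first so the 13-to-14 change doesn't get bumped a second time (BSD sed, which OneFS ships, wants the empty argument after -i):

cp A100_root_resize.sh A100_root_resize.sh.orig
sed -i '' -e 's/-i 14 /-i 15 /g' -e 's/-i 13 /-i 14 /g' A100_root_resize.sh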
