The IBM Spectrum Scale 4.2 release supports node-at-a-time migration if the existing nodes in the cluster are running an IBM Spectrum Scale 4.1.x release. IBM Spectrum Scale 4.1 nodes can coexist and interoperate with nodes running IBM Spectrum Scale 4.2.
I wanted to execute a shell script:

    -rwxr-x--x 1 root root 17234 Jun 6 18:31 createmgw3shelf6xIPNI1P.sh

I tried to do the standard procedure, but I got this error:

    ./createmgw3shelf6xIPNI1P.sh localhost 389 -l /opt/fews/sessions/AMGWM19/log/2013-37CLA-0
    DEBUG cd /etc/opt/ldapfiles/ldifin; ./createmgw3shelf6xIPNI1P.sh localhost 389 -l /opt/fews/sessions/AMGWM19/log/2013-37CLA-0
    ERROR sh: ./createmgw3shelf6xIPNI1P.sh: /bin/bash^M: bad interpreter: No such file or directory

What does it mean? I was doing this as the root user under the root group. Does it mean that the file does not have the correct permission for the root user?

This isn't a permission issue; you aren't getting a message about permissions:

    /bin/bash^M: bad interpreter: No such file or directory

The error indicates that the script must be executed by a shell located at /bin/bash^M.
There is no such file: it's called /bin/bash. Linux uses the single LF (line feed) character to mark the end of a line, whereas Windows uses the two-character sequence CR LF.
Your file has Windows line endings, which is confusing Linux. Remove the spurious CR characters. You can do it with the following command:

    sed -i -e 's/\r$//' createmgw3shelf6xIPNI1P.sh
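A minimal way to reproduce and fix the problem, using a throwaway script (demo.sh is just an illustrative name, not the file from the question):

```shell
# Create a script with Windows (CR LF) line endings -- this reproduces the error.
printf '#!/bin/bash\r\necho hello\r\n' > demo.sh
chmod +x demo.sh
./demo.sh || true    # fails with: /bin/bash^M: bad interpreter

# Strip the trailing CR from every line; the script then runs normally.
sed -i -e 's/\r$//' demo.sh
./demo.sh            # prints: hello
```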
Your file has DOS/Windows-style line endings (CR LF), but on Unix-like systems only the LF control character is used as the line break. The additional CR is shown encoded as ^M in your output.
You can also see it when you run cat -A createmgw3shelf6xIPNI1P.sh. To convert the line endings from DOS/Windows style to Unix style, there's a tool called dos2unix.
You install it using:

    sudo apt-get install dos2unix

Then you can convert files' line endings in both directions:

    dos2unix FILENAME
    unix2dos FILENAME

In your case, simply run the command below and the script file will be converted in place:

    dos2unix createmgw3shelf6xIPNI1P.sh

After that, Bash should be able to interpret the file correctly.
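If dos2unix isn't available and you can't install packages, tr is a portable fallback (the file name below is a placeholder). Unlike sed -i, tr reads stdin and writes stdout, so it can't edit in place:

```shell
# Create a sample file with DOS (CR LF) line endings for the demonstration.
printf 'line one\r\nline two\r\n' > script_dos.sh

# Delete every CR byte; write to a temporary file, then move it back.
tr -d '\r' < script_dos.sh > script_dos.sh.tmp
mv script_dos.sh.tmp script_dos.sh

# 'cat -A' now shows each line ending as a plain '$' instead of '^M$'.
cat -A script_dos.sh
```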
I'm performing an AIX migration from 7.1 to 7.2 using the DVD ISO on a VIOS virtual media library. At the point the migration starts, it fails with the following message:

    0516-1775 varyonvg: Physical volumes hdisk0 and hdisk4 have identical PVIDs (00cdc0334d8c16a1).

I know this is happening because the LUNs are provided by NetApp storage and the SMS menu doesn't support the multipathing software, so instead of one disk with 4 paths, SMS interprets it as 4 disks with a PVID conflict.
If I ask the Netapp team to kill all paths except one, it works. But I don't want to do it for all LPARs in my environment.
How do I avoid this without involving the NetApp team?

For FC & FCoE, AIX 7.2 (all revisions & SPs) is supported with NetApp ONTAP from 8.2 (7-Mode & Cluster-Mode) to ONTAP 9.4.
To be precise, here are the supported configurations:

    Host Volume Manager: IBM AIX LVM or Oracle ASM
    Host File System: GPFS, IBM AIX, RawIO, JFS, Oracle ASM
    Host Clustering: IBM PowerHA (HACMP), Oracle RAC
    Host HBA: IBM HBA FC5270, IBM HBA FC5708, IBM HBA FCEN0H, IBM HBA FCEN0J, IBM HBA FCEN0K, IBM HBA FCEN0L

In your case, it is recommended to install native AIX multipathing (IBM AIX MPIO), which is supported with those versions of ONTAP. I would recommend configuring MPIO properly instead of removing paths.
NetApp provides documentation that will help you check the MPIO configuration. But if removing paths is only a temporary solution, let's say for upgrade purposes, and you can't remove them from AIX, then you should ask your storage admins: they can remove all the paths and leave only one path for you.

It turns out there is no official support for NetApp MPIO during a DVD BOS install.
I've opened a support case with IBM and they confirmed it. I was able, though, to work around it using 'non-official' ways.

First, start the installation process by booting from the DVD. At the 'Welcome to Base Operating System' menu, choose:

    3 Start Maintenance Mode for System Recovery

Then, in the 'Maintenance' menu, select:

    3 Access Advanced Maintenance Functions

There, remove all disks (paths) except one, which will be the target of the migration/installation. You can use the following script to remove the disks:

    disk=hdisk0
    for pv in $(lspv | grep -v "^$disk " | cut -d ' ' -f1)
    do
        rmdev -dl $pv
    done
    exit  # go back to the migration menu

This solves the first PVID problem, so once you are back at the migration menu, go back to the installation options and start the migration. The migration will continue and all packages will be installed, but there will be another issue at the very end.
At the point when the boot image is installed on the disk, the duplicated disks will be back, and this step will fail. But I was able to install it myself with the following steps:

    disk=hdisk0   # the migrated/installed PV
    for pv in $(lspv | grep -v "^$disk " | cut -d ' ' -f1)
    do
        rmdev -dl $pv
    done
    importvg -Oy rootvg $disk
    mount /usr
    /etc/methods/cfg64
    ln -fs /usr/lib/boot/unix_64 /unix
    ln -fs /usr/lib/boot/unix_64 /usr/lib/boot/unix
    mkboot -cd /dev/$disk
    cp -rp /usr/lpp/bos/inst_root/etc/rc.teboot /etc/rc.teboot
    cp -rp /usr/lpp/bos/inst_root/sbin/rc.boot /sbin/rc.boot
    bosboot -ad /dev/$disk

Hope this helps someone in trouble like I was. Regards.
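The lspv | grep | cut pipeline in the scripts above only runs on AIX, so here is a sketch of that filtering step with mocked lspv output (disk names and the PVID are taken from the question; lspv_mock is a stand-in for the real command). Note the double quotes and the ^ anchor: they let $disk expand and prevent, say, hdisk1 from also matching hdisk10:

```shell
disk=hdisk0   # the disk to keep (illustrative)

# Stand-in for AIX 'lspv' output, used only for this demonstration.
lspv_mock() {
    printf '%s\n' \
        'hdisk0          00cdc0334d8c16a1                    rootvg          active' \
        'hdisk1          00cdc0334d8c16a1                    None' \
        'hdisk4          00cdc0334d8c16a1                    None'
}

# Same filter as in the answer: drop the disk we keep, take the first column.
for pv in $(lspv_mock | grep -v "^$disk " | cut -d ' ' -f1)
do
    echo "would remove: $pv"   # on AIX this would be: rmdev -dl $pv
done
```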