Monday, August 22, 2016

Linux Interview Question - Part 5

1) The /etc/passwd file was deleted. How do you recover it?
sudo cp /etc/passwd- /etc/passwd
sudo chmod 644 /etc/passwd
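If you want to rehearse this recovery pattern safely before touching the real files, the same steps can be simulated in a scratch directory. This is a minimal sketch (the paths are temporary stand-ins, not the real /etc files):

```shell
# shadow-utils tools (passwd, useradd, vipw, ...) keep a one-generation
# backup of /etc/passwd with a trailing '-'. Simulate the recovery:
tmp=$(mktemp -d)
printf 'root:x:0:0:root:/root:/bin/bash\n' > "$tmp/passwd-"  # the backup copy
rm -f "$tmp/passwd"                                          # the "deleted" file
cp "$tmp/passwd-" "$tmp/passwd"                              # the recovery step
chmod 644 "$tmp/passwd"                                      # restore permissions
recovered=$(grep -c '^root:' "$tmp/passwd")                  # 1 if root entry is back
perms=$(stat -c '%a' "$tmp/passwd")                          # expect 644
rm -rf "$tmp"
```

The only difference on a real system is the `/etc/` path and the need for sudo.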

Edit: (another method; I have not tried this one myself, but it should also work)

Start GRUB on boot (press ESC while booting)
Highlight the (recovery mode) entry and press e
Press e over the line beginning with kernel
Press the space bar and append "init=/bin/bash"
Press Enter
Press b to boot
At the command prompt, remount the root filesystem read-write (it comes up read-only under init=/bin/bash): "mount -o remount,rw /"
Restore the backup: "cp /etc/passwd- /etc/passwd"
If you also need to reset a password, type "passwd YOURUSERNAMEHERE" (if you don't know your username, type "ls /home" (that is a lower-case L and lower-case S) for a list of users)
Enter the new password at the prompt
Reboot to a normal boot.

#########################END ####################

2) What is the Ethernet bonding feature and how do you set it up?

Configuring Network Interface Bonding

Network interface bonding (also known as port trunking, channel bonding, link aggregation, NIC teaming, among other names) combines multiple network connections into a single logical interface. A bonded network interface can increase data throughput by load balancing or can provide redundancy by allowing failover from one component device to another. By default, a bonded interface appears like a normal network device to the kernel, but it sends out network packets over the available slave devices by using a simple round-robin scheduler. You can configure bonding module parameters in the bonded interface's configuration file to alter the behavior of load-balancing and device failover.

Basic load-balancing modes (balance-rr and balance-xor) work with any switch that supports EtherChannel or trunking. Advanced load-balancing modes (balance-tlb and balance-alb) do not impose requirements on the switching hardware, but they do require that the device driver for each component interface implement certain features, such as support for ethtool or the ability to change the hardware address while the device is active. For more information, see /usr/share/doc/iputils-*/README.bonding.

You can use the bonding driver that is provided with the Oracle Linux kernel to aggregate multiple network interfaces, such as eth0 and eth1, into a single logical interface such as bond0.

To create a bonded interface:

Create a file named ifcfg-bondN in the /etc/sysconfig/network-scripts directory, where N is the number of the interface, such as 0.

Edit the contents of ifcfg-bondN so that they are similar to the configuration settings for an Ethernet interface, except that DEVICE is set to bondN rather than ethN, for example:

DEVICE="bond0"
IPADDR=192.168.1.121
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
TYPE=Ethernet
BONDING_OPTS="bonding parameters separated by spaces"
The BONDING_OPTS setting is optional; use it when you need to pass parameters to the bonding module, for example to specify the load-balancing mechanism or to configure ARP link monitoring. For more information, see /usr/share/doc/iputils-*/README.bonding.
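As an illustrative (not prescriptive) example, BONDING_OPTS for an active-backup bond with MII link monitoring might look like this. The bonding driver's modes are: 0 balance-rr, 1 active-backup, 2 balance-xor, 3 broadcast, 4 802.3ad (LACP), 5 balance-tlb, 6 balance-alb.

```shell
# In ifcfg-bond0 (ifcfg files are plain shell variable assignments):
# mode=active-backup gives failover without any switch support;
# miimon=100 checks link state every 100 milliseconds.
BONDING_OPTS="mode=active-backup miimon=100"
```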

For each interface that you want to bond, edit its ifcfg-interface file so that it contains MASTER=bondN and SLAVE=yes entries. The slave interface itself should not be assigned an IP address; addressing is configured on the bonded interface. For example:

DEVICE="eth0"
NAME="System eth0"
NM_CONTROLLED="no"
ONBOOT=yes
USERCTL=no
TYPE=Ethernet
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
Create the file /etc/modprobe.d/bonding.conf, so that it contains an entry for each bonded interface, for example:

alias bond0 bonding
The existence of this file ensures that the kernel loads the bonding module when you bring up the bonded interface. Every bonded interface that you configure requires an entry in this file.
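For example, if you had configured two bonded interfaces, bond0 and bond1, the file might read as follows (a sketch; adjust it to match the interfaces you actually created):

```
alias bond0 bonding
alias bond1 bonding
```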

If the component interfaces are up, bring them down, and then bring up the bonded interface:

# ip link set eth0 down
# ip link set eth1 down
# ip link set bond0 up

#########################END ####################

3) soft mount & hard mount in nfs

Difference Between NFS Soft And Hard Mount With Example

Using the NFS protocol, an NFS client can mount a filesystem that resides on an NFS server just as if it were a local file system. For example, you can mount the “/home” directory of host.nfs_server.com on your client machine as follows.

# mount host.nfs_server.com:/home /techhome

The directory “/techhome” must already exist on your machine to serve as the mount point. An NFS mount can be done either as a “soft mount” or as a “hard mount”; these mount options define how the NFS client handles an NFS server crash or failure. In this article we will see the difference between soft and hard mounts.

1. Soft Mount

Suppose you have mounted an NFS filesystem using a “soft mount”. When a program or application requests a file from the NFS filesystem, the NFS client daemons try to retrieve the data from the NFS server. If they get no response from the NFS server (due to a crash or failure of the server), the NFS client reports an error to the process on the client machine that requested the file access. The advantage of this mechanism is fast responsiveness: the client does not wait indefinitely for the NFS server to respond. The main disadvantage is the risk of data corruption or data loss, so soft mounting is generally not the recommended option.

mount -o rw,soft host.nfs_server.com:/home /techhome

2. Hard Mount


If you have mounted the NFS filesystem using a hard mount, the client repeatedly retries contacting the server. Once the server is back online, the program continues to execute undisturbed from the state it was in when the server crashed. You can add the mount option “intr”, which allows NFS requests to be interrupted if the server goes down or cannot be reached. Hence the commonly recommended settings are the hard and intr options.

mount -o rw,hard,intr host.nfs_server.com:/home /techhome
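To make this hard mount persistent across reboots, an equivalent /etc/fstab entry (using the same hypothetical server name as above) would look like:

```
host.nfs_server.com:/home  /techhome  nfs  rw,hard,intr  0  0
```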

#########################END ####################

4) NFS hard mounts vs soft mounts

Several times over the past few years, I have had the situation where I had been using an NFS server when something happened, the client lost connection with the server, and the entire system froze.

Recently I found out that this is by design. Apparently this is a result of using hard mounts, which are the default in most cases. This is what I understand about how hard mounts and soft mounts work:

a) Hard mounts

Advantages: If the connection loss is a minor, transient problem, you may not lose any data once the NFS share becomes available again. The trade-off is that every NFS client's applications will freeze, and possibly entire client systems will be useless, until the NFS server comes back online.

Disadvantages: If an application freezes and you cannot bring the NFS server back up, your only option appears to be to kill that application, even if it could easily have survived the write errors. Also, a simple NFS share where you occasionally dump files, one that is completely unnecessary for the system to function, can freeze the entire system indefinitely if the server loses its connection to the client.

b) Soft mounts

Advantages: They work as expected (for the most part) - if the server fails, the application gets an I/O error, and keeps going.

Disadvantage: According to the nfs man page, and every other source on the internet, this leads to silent data corruption because applications get told prematurely that a write was successful when in fact the data is still in cache, unable to be written to the NFS server that we just lost connection to.


What I don't quite understand is this:

1) How can the most widely accepted solution to using NFS (as far as I can tell) be to use NFS hard mounts, and if the server ever dies, kill the application that is frozen?

Example: Suppose I had been working on a gedit document for 30 minutes and wanted to save it while the NFS mount was down. This is what would happen:

Soft mount - gedit gets an I/O error and asks where else you want to save your work

Hard mount - gedit freezes indefinitely, forcing you to kill it and lose all your unsaved work
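If you do choose a soft mount, its timeout behavior is tunable: timeo sets the per-request timeout in tenths of a second, and retrans sets how many retries are attempted before the client gives up and returns an error. A hypothetical /etc/fstab entry (server name and values are illustrative, not recommendations):

```
nfsserver:/export  /mnt/nfs  nfs  rw,soft,timeo=100,retrans=5  0  0
```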

#########################END ####################

5) How do you export a file system over the network?

NFS (Network File System) is both a protocol and a file system for accessing and sharing file systems across a computer network on UNIX and Linux. NFSv4 is used in modern Linux distributions; it offers performance improvements, mandates strong security, and introduces a stateful protocol.


How do I export a directory with NFS?


In order to export or share a directory called /data2, you need to edit the file /etc/exports. The file /etc/exports serves as the access control list for file systems that may be exported to NFS clients:
# vi /etc/exports
Add config directive as follows:
/data2 *(rw,sync)
Each line contains an export point and a whitespace-separated list of clients allowed to mount the file system at that point. Each listed client may be immediately followed by a parenthesized, comma-separated list of export options for that client.
Where,
  • rw – Allow both read and write requests on /data2 NFS volume
  • sync – Reply to requests only after the changes have been committed to stable storage
Save and close the file. Restart the nfs service:
# /etc/init.d/nfs restart
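The * wildcard in the example above allows any host to mount the share. A more restrictive /etc/exports line (the addresses and hostname here are hypothetical) could grant read-write access to one subnet and read-only access to a single host:

```
/data2 192.168.1.0/24(rw,sync) backup.example.com(ro,sync)
```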

NFS client configuration

The client computer needs to mount the file system using the mount command or an /etc/fstab entry. Enter:
# mkdir /mnt/nfs
# mount -t nfs4 nfsserver-name-or-ip:/data2 /mnt/nfs
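To make the client mount persistent across reboots, the equivalent /etc/fstab entry (same placeholder server name) would be:

```
nfsserver-name-or-ip:/data2  /mnt/nfs  nfs4  defaults  0  0
```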
Read the man page for more configuration options:
$ man exports
#########################END ####################

