Note: This is an RHCE 7 exam objective.
In this tutorial, the NFS server is called nfsserver.example.com and the NFS client nfsclient.example.com.
NFS Server Configuration
Install the file-server package group:
# yum groupinstall -y file-server
Add a new service to the firewall:
# firewall-cmd --permanent --add-service=nfs
success
Note: NFSv4 is the version used at the exam and doesn’t need any extra firewall configuration. However, beyond the exam objectives, if you plan to use NFSv3, you will also need to run these commands:
# firewall-cmd --permanent --add-service=mountd
# firewall-cmd --permanent --add-service=rpc-bind
Reload the firewall configuration:
# firewall-cmd --reload
success
Activate the NFS services at boot:
# systemctl enable rpcbind nfs-server
Note: The nfs-idmap/nfs-idmapd (the name changed with RHEL 7.1) and nfs-lock services are automatically started by the nfs-server service. nfs-idmap/nfs-idmapd is required by NFSv4 but does not resolve any UID/GID mismatches between clients and server: it is only used when setting ACLs by name or when displaying user/group names.
All permission checks are still done with the UIDs/GIDs used by the server.
Start the NFS services:
# systemctl start rpcbind nfs-server
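To check that everything went well, you can ask systemd for the enablement and activation state of both services (the service names are those used above; adapt them to your RHEL minor version):
# systemctl is-enabled rpcbind nfs-server
# systemctl is-active rpcbind nfs-server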
Note1: By default, 8 NFS threads are used (RPCNFSDCOUNT=8 in the /etc/sysconfig/nfs file). This should be increased in a production environment to at least 32.
Note2: Optionally, to enable SELinux Labeled NFS Support, edit the /etc/sysconfig/nfs file and add the following line: RPCNFSDARGS="-V 4.2"
Create directories to export and assign access rights:
# mkdir -p /home/tools
# chmod 0777 /home/tools
# mkdir -p /home/guests
# chmod 0777 /home/guests
Assign the correct SELinux contexts to the new directories:
# yum install -y setroubleshoot-server
# semanage fcontext -a -t public_content_rw_t "/home/tools(/.*)?"
# semanage fcontext -a -t public_content_rw_t "/home/guests(/.*)?"
# restorecon -R /home/tools
# restorecon -R /home/guests
Note: The public_content_rw_t context is not the only one available: you can also use the public_content_ro_t (read-only) or nfs_t (more limited) contexts according to your needs.
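To verify that the contexts were correctly applied, a quick check (the exact output format varies with the RHEL minor version):
# ls -Zd /home/tools /home/guests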
Check the SELinux booleans used for NFS:
# semanage boolean -l | egrep "nfs|SELinux"
SELinux boolean                State    Default  Description
xen_use_nfs                    (off  ,  off)    Allow xen to use nfs
virt_use_nfs                   (off  ,  off)    Allow virt to use nfs
mpd_use_nfs                    (off  ,  off)    Allow mpd to use nfs
nfsd_anon_write                (off  ,  off)    Allow nfsd to anon write
ksmtuned_use_nfs               (off  ,  off)    Allow ksmtuned to use nfs
git_system_use_nfs             (off  ,  off)    Allow git to system use nfs
virt_sandbox_use_nfs           (off  ,  off)    Allow virt to sandbox use nfs
logrotate_use_nfs              (off  ,  off)    Allow logrotate to use nfs
git_cgi_use_nfs                (off  ,  off)    Allow git to cgi use nfs
cobbler_use_nfs                (off  ,  off)    Allow cobbler to use nfs
httpd_use_nfs                  (off  ,  off)    Allow httpd to use nfs
sge_use_nfs                    (off  ,  off)    Allow sge to use nfs
ftpd_use_nfs                   (off  ,  off)    Allow ftpd to use nfs
sanlock_use_nfs                (off  ,  off)    Allow sanlock to use nfs
samba_share_nfs                (off  ,  off)    Allow samba to share nfs
openshift_use_nfs              (off  ,  off)    Allow openshift to use nfs
polipo_use_nfs                 (off  ,  off)    Allow polipo to use nfs
use_nfs_home_dirs              (off  ,  off)    Allow use to nfs home dirs
nfs_export_all_rw              (on   ,   on)    Allow nfs to export all rw
nfs_export_all_ro              (on   ,   on)    Allow nfs to export all ro
Note1: The State column shows the current boolean configuration, while the Default column shows the permanent boolean configuration.
Note2: Here we are interested in the nfs_export_all_rw, nfs_export_all_ro and potentially use_nfs_home_dirs booleans.
Note3: The nfs_export_all_ro boolean allows files to be shared through NFS in read-only mode, but it does not by itself prevent read-write use. It is the role of the nfs_export_all_rw boolean to allow read-write mode.
If necessary, assign the correct setting to the SELinux booleans:
# setsebool -P nfs_export_all_rw on
# setsebool -P nfs_export_all_ro on
# setsebool -P use_nfs_home_dirs on
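To quickly confirm the result for just the booleans of interest (rather than grepping the whole semanage listing again), you can use:
# getsebool nfs_export_all_rw nfs_export_all_ro use_nfs_home_dirs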
Edit the /etc/exports file and add the following lines with the name (or IP address) of the client(s):
/home/tools nfsclient.example.com(rw,no_root_squash)
/home/guests nfsclient.example.com(rw,no_root_squash)
Note: Don’t put any space before the opening parenthesis; this would completely change the meaning of the line!
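To illustrate why the space matters, according to the exports(5) man page the two following lines are not equivalent:
/home/tools nfsclient.example.com(rw)
(exports /home/tools read-write to nfsclient.example.com only)
/home/tools nfsclient.example.com (rw)
(exports /home/tools to nfsclient.example.com with the default options, and read-write to everybody else)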
Export the directories:
# exportfs -avr
exporting nfsclient.example.com:/home/guests
exporting nfsclient.example.com:/home/tools
# systemctl restart nfs-server
Note: This last command shouldn’t be necessary in the future. But, for the time being, it avoids rebooting.
Check your configuration:
# showmount -e localhost
Export list for localhost:
/home/guests nfsclient.example.com
/home/tools  nfsclient.example.com
Note: You can test what is exported by the NFS server from a remote client with the command showmount -e nfsserver.example.com but you first need to stop Firewalld on the NFS server (or open the 111 udp and 20048 tcp ports on the NFS server).
NFS Client Configuration
On the client side, the commands are:
# yum install -y nfs-utils
# mount -t nfs nfsserver.example.com:/home/tools /mnt
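If you want the mount to survive a reboot, you could add a line like the following one to the client’s /etc/fstab file (the /mnt mount point and the _netdev option, which delays the mount until the network is up, are assumptions for this example):
nfsserver.example.com:/home/tools /mnt nfs defaults,_netdev 0 0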
Additional Resources
The GeekDiary website provides a tutorial about Configuring a NFS server and NFS client.
The Arch Linux wiki offers a page about Troubleshooting NFS.
Following this manual
[root@rhel7-client ~]# mount -t nfs rhel7-server.local:/home/tools /mnt/
[root@rhel7-client ~]# ll /mnt/
total 0
[root@rhel7-client ~]# echo 123 > /mnt/file1
hangs for too long
why?
Check the connectivity between client and server with ping and IP addresses,
check the DNS client and server configuration with ping and hostnames,
check the /etc/exports file and the associated syntax (no space before the opening parenthesis), re-export the share (# exportfs -avr),
check the NFS server firewall configuration (with nmap if necessary from the client),
check the directory permission of the exported share,
put SELinux in Permissive mode and retry (# setenforce Permissive),
reboot the NFS server and check that all the required NFS services are running (# systemctl is-active nfs-lock; etc).
be prepared to open 111 udp/tcp and whatever port you assign to mountd (20048 udp/tcp by default, as configured in /etc/sysconfig/nfs): "showmount -e servername" should work, and "ls /net/servername" relies on it if you have automount turned on. Although NFSv4 does not actually require these ports for operation, "showmount" does.
After a little searching (grep), these two services are defined in the default firewalld XML files.
The 20048 udp/tcp service is known as mountd.
The 111 udp/tcp service is known as rpcbind.
firewall-cmd --add-service=mountd --permanent
firewall-cmd --add-service=rpc-bind --permanent
From the RedHat webpage:
Enable the services at boot time
# systemctl enable nfs-server
# systemctl enable rpcbind
# systemctl enable nfs-lock <-- In RHEL 7.1 (nfs-utils-1.3.0-8.el7) this does not work (No such file or directory). It does not need to be enabled since rpc-statd.service is static.
# systemctl enable nfs-idmap <-- In RHEL 7.1 (nfs-utils-1.3.0-8.el7) this does not work (No such file or directory). It does not need to be enabled since nfs-idmapd.service is static.
You are perfectly right.
Two of the commands at the start of the tutorial are broken.
# systemctl enable nfs-server
# systemctl enable nfs-lock
Both fail with a mysterious error message. The cause seems to be a change in how links are made in the packages, and systemctl doesn’t work with links to files… at least that’s the short version from Googling. The services do actually start with the start command listed, but it kind of breaks the tutorial. Also, a comment on how we can check the systemctl status for these would be helpful.
Thanks for the great tutorials!!!
I have updated the tutorial.
If it takes too long or seems to hang while creating files or folders in NFS shares, then the reason could be an ownership issue. Try changing the ownership of the NFS share on the server to nfsnobody as follows:
chown -R nfsnobody:nfsnobody /nfsshare
chmod -R g+rxws /nfsshare
Now try creating…
Interesting. Thanks.
you have configured to /etc/export for specific clients (“client1”, “client2”), but later in output of “showmount -e localhost” we see asterisks (*). Shouldn’t “client1” and “client2” be there?
You are perfectly right. I updated the tutorial. Thanks.
Hi,
When I tried to start nfs-lock and nfs-idmap services, there are problems.
[root@server1 ~]# systemctl enable nfs-lock
Failed to issue method call: No such file or directory
[root@server1 ~]# systemctl enable nfs-idmap
Failed to issue method call: No such file or directory
But the nfs service was working well.
I don’t know why there is nfs-lock and nfs-idmap services (I cant start/stop) but I can’t enable them.
Since RHEL 7.1, the NFS configuration has changed. I finally updated the tutorial to take this evolution into account.
Why did you decide to use no_root_squash option for the /home/guests share? I imagine it should never be accessed by root on client…
It’s simply easier. I perfectly understand that from a security perspective it’s definitively not an option to use in a production environment.
When using automount for ldap user’s home directories, do those home directories need to exist on the servers side before we log into the client as the ldap user? Or will the home directory automatically be created on the nfs share when ldapuser logs on the client side?
By default the user’s home directories should already be created. However, as explained in the tutorial about configuring a system to use an existing LDAP directory, it is possible to request their creation at first connection through the yum install oddjob-mkhomedir and authconfig --enablemkhomedir --update commands.
Firstly, thank you CertDepot for this site!!. This site is an awesome learning resource!
Regarding the problem I am having with Auto mount of home directories:
If I have the enablemkhomedir option configured via authconfig, then the directory gets created, but only on the local file system of the client machine. It does not show up on the NFS share on the server side.
If the ldap user’s home dir already exists on the nfs server then automount mounts it just fine on the client and any files I create in the dir on the client side also show up on the nfs share on the server side.
Any idea what other options/settings I should look into?
Btw I created my LDAP/NFS server using that tutorial that you linked in your response.
Thank you for the kind words.
Concerning the enablemkhomedir option, I didn’t explore all the intricacies. I don’t know how to change the behavior that you saw (creation of the home directory on the client side instead of the server side). I’m very interested in your future findings on this subject.
If I come across a solution in the coming months, I will tell you.
Actually the behavior where the home directory was getting created on the client side was totally my mistake, I forgot to restart autofs, dohh! :). So basically oddjob-mkhomedir was not even attempting to create the dir on the NFS share.
Once I started autofs I ended up getting the same error I mentioned earlier, i.e. oddjob gets permission denied when attempting to create the home dir. I am pretty sure that all requirements for dir permissions, SELinux booleans and NFS mount options are set correctly, so this permission denied is really perplexing. Maybe there is a bug in the oddjob-mkhomedir package. I will play with it some more down the line, moving on to other topics for now since my exam is scheduled for the end of the week.
My workaround for now is to use direct maps in autofs. Oddjob has no problems creating home dirs on the NFS share for first-time ldapuser logins. On the exam, if they want us to use indirect maps then I hope they already have existing home dirs on the NFS share 🙂
Look forward to hearing from your side on this issue if you come across any findings. Thanks again!
Concerning the RHCSA exam, there is no objective dealing with making home directory at first login on NFS that I’m aware of. It’s a more advanced topic.
So, don’t spend too much time on this. 😉
Only way I can get automount of home directory to work for first time ldapuser logins is if I am using direct map.
When using indirect map i get the following error
On the client side: oddjob-mkhomedir[15230]: error creating /home/ldap/ldapuser2: Permission denied
On the server side: Aug 01 20:26:56 labipa.example.com rpc.mountd[13603]: can’t stat exported dir /home/ldap/ldapuser2: No such file or directory
Interesting. Thanks.
I thought that we should use NFSv4, which is the only version that supports Kerberos, if I’m not wrong.
Am I under a wrong impression?
I think NFSv4 is the default version if you don’t mention anything. Then, you can specify NFSv4.1 or NFSv4.2.
On RHEL 7, by default, mount will use NFSv4 with “mount -t nfs”. If the server does not support NFSv4, the client will automatically step down to a version supported by the server.
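If you prefer not to rely on this negotiation, the client can request a specific protocol version explicitly with the nfsvers (or vers) mount option described in the nfs(5) man page, for example:
# mount -t nfs -o nfsvers=3 nfsserver.example.com:/home/tools /mnt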
Yes, but enabling NFSv4 only is different from enabling NFSv3 + NFSv4. So, if nothing is mentioned on the exam about the version, should I assume NFSv4 only?
The opposite – you shouldn’t assume things on the exam. If it’s not mentioned, then use the settings that come with the OS.
It seems this guide provides both a v3 and a v4 NFS server 🙂
The only thing I have installed is "nfs-utils nfs4-acl-tools".
Yes.
I was playing with NFS and I found out on the web that it is possible to limit the NFS version by editing /etc/sysconfig/nfs and adding the following:
RPCNFSDCOUNT=" --no-nfs-version "
Of course a restart of the server is needed.
Don’t use this RPCNFSDCOUNT option that defines the number of threads used by the nfs daemons.
Use the RPCNFSDARGS option (for example: RPCNFSDARGS="-V 4.2") in the /etc/sysconfig/nfs file (see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/SELinux_Users_and_Administrators_Guide/sect-Managing_Confined_Services-NFS-Configuration_Examples.html).
Hi CertDepot,
On my NFS server, executing showmount -e localhost on the server succeeds (the shared dir is displayed), but when I execute it on the client side I get a "no route to host" error. Yet the shared dir can be mounted by the client, and both read and write work. Is this behavior normal on RHEL 7.0?
Thanks for your reply
Yes, this is the normal behavior.
If you want to use the showmount command to test the shares exported by an NFS server from a remote client, you need to stop Firewalld on the NFS server or open the 111 udp and 20048 tcp ports on the NFS server.
If you are using a local dns, check the setup. You want to make sure there is no conflict, and that all relevant services have the proper dns settings (use ping). An nmap port scan is worth doing.
This doesn’t work for me unless on the server I do this in addition to nfs:
firewall-cmd --add-service rpc-bind --permanent
firewall-cmd --add-service mountd --permanent
firewall-cmd --reload
Both the rpc-bind and mountd services are only required by NFSv3 and can be skipped when setting up an NFSv4 server. If you are not setting up an NFSv3 server, then your problem lies somewhere else.
OK, with more troubleshooting it looks like it’s just the showmount command, and I see that’s already been addressed here.
So the firewall services mountd and rpc-bind allow ports 111 and 20048, as has already been identified.
Thanks
Anyone having troubles installing file-server package?
Other normal yum commands work, but not the file-server group install. I guess I can try to find the individual packages for the required services and install them. I haven’t seen a single comment about the file-server install issue in the comment section.
I am using rhel 7.3 kernel version below.
vmlinuz-3.10.0-514.21.1.el7.x86_64
Output from groupinstall error.
[root@ipa ~]# yum groupinstall -y file-server
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
Maybe run: yum groups mark install (see man yum)
No packages in any requested group available to install or update
[root@ipa ~]# yum info nfs
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
Error: No matching Packages to list
[root@ipa ~]# yum info nfs*
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
Installed Packages
Name : nfs-utils
Arch : x86_64
Epoch : 1
Version : 1.3.0
Release : 0.33.el7_3
Size : 1.0 M
Repo : installed
Strangely, I get this error when testing the NFS mount from the client:
[root@server1 mnt]# showmount -e nfsserver.example.com
clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)
However, it works when I disable the firewall on the server.
Strangely, when I check the mount point on the client and server, it is mounted and I can create or delete files on the client or server.
Based on the Redhat link, they state various reasons can cause this, mostly firewall block. Any thoughts?
Be advised that NFSv4 does not use the mountd daemon, therefore showmount will not return information about version 4 mounts. However, if your NFS server has NFSv3 enabled, then you can allow mountd and rpcbind traffic on the firewall and use the showmount command.
showmount fails with the error below. I can confirm the NFS mount works by creating and deleting files on the /mnt point on the client or server. It is just showmount on the client that is not working. It does however work if I disable the firewall on the NFS server. One thing I noticed is there is no mountd listening on port 20048 for version 4, just version 3; not sure if that has to do with it.
Any thoughts or assistance will be appreciated.
[root@server1 mnt]# showmount -e nfsserver.example.com
clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)
[root@server1 mnt]# rpcinfo -p
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 58456 status
100024 1 tcp 32830 status
100005 1 udp 20048 mountd
100005 1 tcp 20048 mountd
100005 2 udp 20048 mountd
100005 2 tcp 20048 mountd
100005 3 udp 20048 mountd
100005 3 tcp 20048 mountd
To use the showmount command you need to allow mountd and rpcbind traffic on your firewall. Because NFSv4 does not use the mountd daemon, showmount will not return information about version 4 mounts.
Just sharing, maybe someone can confirm this: if my NFS server is not available and I reboot, my client hangs (or maybe it just takes more time to reboot, I’m not sure about this).
After a search on the internet I found this: https://access.redhat.com/solutions/28211
I’m using VirtualBox and RHEL 7.0.
Hope that helps.
Hello,
I have a question regarding allowing specific domains for NFS. Does the line below mean allow only nfsclient.example.com?
/home/tools nfsclient.example.com(rw,no_root_squash)
How would you set it up if the question was to allow all clients of the nfsclientX.example.com domain?
Is *.example.com correct, or would that mean allowing every host under example.com, i.e. nfsclient2, nfsclient3, and so on? If a question such as this is on the exam, can I just put the first octets of that domain?
Thanks.
Yes, it means only the nfsclient.example.com server.
nfsclient*.example.com should work for nfsclient1, nfsclient2, etc but I haven’t tried.
OK, thanks. And can clients from the clientZ.example.com domain also be added as *.clientZ.example.com? Or even as an IP range like 192.168.20.0/24?
You can’t specify 192.168.20.0/24 but you can type 192.168.20
You can specify 192.168.20.0/24. This is from Red Hat documentation.
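For example, according to the exports(5) man page, both of these subnet forms are valid in /etc/exports (the subnet and options are just illustrations):
/home/tools 192.168.20.0/24(rw)
/home/tools 192.168.20.0/255.255.255.0(rw)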
Used “yum install -y nfs-utils” instead of “yum groupinstall -y file-server”
Use what works for you!
Hi CertDepot,
Thanks a lot for the article, but lots of steps seem to be redundant (RHEL 7.4 / nfs4), am I missing something?
1. Directories don’t have to be 777: since you export them as rw, local permissions don’t matter.
2. I couldn’t find the group file-server, nfs-server and rpcbind seem to be working just fine.
3. SELinux rules work just fine out of the box, no need to change contexts or config.
4. There is no need to export shares explicitly, the config is reloaded during nfs restart anyway.
You shouldn’t use RHEL 7.4 when preparing the RHCSA or RHCE exams: you will be in trouble during the real exam that uses RHEL 7.2 at the most.
That said, there are certainly possible shortcuts in the presented procedures. They give you a good start; it’s up to you to find the optimal steps (some change according to RHEL minor versions!).
Thanks. All good.