Fortunately ZFS simplifies the intricacies of the NFS part, automating most of the tasks and providing a reasonably hierarchical view of the NFS options associated with an exported file system.
By the way, when talking about NFS I'm assuming NFSv4, which has been the default on Solaris for a few releases now. In addition, I'm not assuming any DNS installation, just NIS, but that shouldn't make a difference either way.
I'd say that one of the very first steps is to double-check the NFSv4 user and group id mapping settings. For all the details, check the nfsmapid(1M) man page. For a simple, heavy-handed override, just set the desired value of the nfsmapid_domain property of the active location profile with sharectl(1M). In fact, this should already be fine with the assumed NIS setup.
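If in doubt, the NFS protocol versions offered by the server can be quickly checked with sharectl(1M) as well; for example (the value below is just the usual default on a recent Solaris release):
# sharectl get -p server_versmax nfs
server_versmax=4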
# sharectl get -p nfsmapid_domain nfs
nfsmapid_domain=business.corp
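Should an explicit override ever be needed (for instance, if the NIS domain doesn't match the desired NFSv4 mapping domain), the same property can be set with sharectl as well; the domain below is of course just the one from this example:
# sharectl set -p nfsmapid_domain=business.corp nfs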
I'll lazily (in a real scenario consider placing /export on a dedicated device outside rpool) create a ZFS file system, set the ACL as in a previous example, and finally share it via NFS to illustrate the whole idea:
# zfs create rpool/export/data
# zfs set quota=5G rpool/export/data
# zfs get share.all rpool/export/data | grep nfs
rpool/export/data share.nfs off default
rpool/export/data share.nfs.* ... default
# ll -dV /export/data
drwxr-xr-x 2 root root 2 Oct 29 10:28 /export/data
owner@:rwxp-DaARWcCos:-------:allow
group@:r-x---a-R-c--s:-------:allow
everyone@:r-x---a-R-c--s:-------:allow
The following will allow user1 to create (but not delete) any number of subdirectories directly under /export/data: the p bit grants add_file/add_subdirectory, while the missing d and D bits withhold delete and delete_child. The contents of each subdirectory so created will be fully manageable by user1, including wiping them out.
# chmod A+user:user1:rxpaRcs:allow /export/data
# ll -dV /export/data
drwxr-xr-x 2 root root 2 Oct 29 10:28 /export/data
user:user1:r-xp--a-R-c--s:-------:allow
owner@:rwxp-DaARWcCos:-------:allow
group@:r-x---a-R-c--s:-------:allow
everyone@:r-x---a-R-c--s:-------:allow
Next, prepare the necessary NFS options. For the sake of simplicity and minimality, without giving up access control altogether, I'll just use the rw= NFS option referencing a suitable netgroup.
# zfs set share.nfs.sec.sys.rw=desktops rpool/export/data
# zfs get share.nfs.sec.sys.rw rpool/export/data
NAME PROPERTY VALUE SOURCE
rpool/export/data share.nfs.sec.sys.rw desktops local
Finally, after double-checking everything, open the gate and activate NFS sharing by switching on a single ZFS file system property.
# zfs set share.nfs=on rpool/export/data
# zfs get share.all rpool/export/data | grep nfs
rpool/export/data share.nfs on local
rpool/export/data share.nfs.* ... local
rpool/export/data share.protocols nfs local
# zfs get share | grep data
...,path=/export/data,prot=nfs,sec=sys,rw=desktops ...
# share | grep data
rpool_export_data /export/data nfs sec=sys,rw=desktops
# showmount -e | grep data
/export/data desktops
And that's all folks! A quick test from a client (dt-10) confirms that user1 can indeed create a subdirectory:
dt-10:~# mount nfs-1:/export/data /mnt
dt-10:~# su - user1
user1@dt-10:~$ mkdir /mnt/test
user1@dt-10:~$ ll -dV /mnt/test
drwxr-xr-x 2 user1 staff 3 Oct 29 11:32 /mnt/test
owner@:rwxp-DaARWcCos:-------:allow
group@:r-x---a-R-c--s:-------:allow
everyone@:r-x---a-R-c--s:-------:allow
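Conversely, and just to double-check the "create but not delete" part, an attempt by user1 to remove a directory sitting directly under the share should be denied; a hypothetical session (the exact error text may vary):
user1@dt-10:~$ rmdir /mnt/test
rmdir: directory "/mnt/test": Permission denied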
It may be worth adding a few notes:
- If the underlying group id happens to change, it may be necessary to restart the nfs/mapid service on the NFS client for the new group id to become visible (see the example after these notes).
- If the access list on the NFS server changes, the share command must be repeated "over the previous one" for the new access list to be published to NFS clients. The simplest way to achieve this is to refresh the SMF service as follows:
# svcadm refresh nfs/server
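As for the first note above, the corresponding client-side restart would look something like this (assuming the standard SMF FMRI of the mapping daemon):
dt-10:~# svcadm restart svc:/network/nfs/mapid:default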