committer filter by committer.
@path/to/ filter by path in repository.
committer@path/to/ filter by committer AND path in repository.
abdef0123 filter by commit's SHA hash.
rNNN filter by SVN revision.
rNNN-rMMM filter by SVN revisions range (inclusive).
Multiple filters can be specified, separated by spaces or commas, in which case they are combined with the OR operator.
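As a sketch of the syntax above (the committer name, path, and revision numbers here are invented for illustration):

```
mav                          commits by committer "mav"
@sys/dev/cxgbe/              commits touching sys/dev/cxgbe/
mav@sys/dev/cxgbe/           commits by mav under that path
mav@sys/dev/cxgbe/, r100-r120   the above OR revisions r100 through r120
```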
|cbc18636||jhb||Aug. 17, 2021, 6:14 p.m.||cxgbei: Restructure how PDU limits are managed.|
- Compute data segment limits in read_pdu_limits() rather than PDU length limits.
- Add back connection-specific PDU overhead lengths to compute PDU length limits in icl_cxgbei_conn_handoff().

Reviewed by: np
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D31574
|8ae86e2e||behlendorf1||Aug. 17, 2021, 5:25 p.m.||ZTS: Add tests for creation time|
|abbf0bd4||behlendorf1||Aug. 17, 2021, 5:25 p.m.||Linux 4.11 compat: statx support|
Linux 4.11 added a new statx system call that allows us to expose crtime as btime. We do this by caching crtime in the znode to match how atime, ctime and mtime are cached in the inode. statx also introduced a new way of reporting whether the immutable, append and nodump bits have been set. It adds support for reporting compression and encryption, but the semantics on other filesystems are not just to report compression/encryption, but to allow them to be turned on/off at the file level. We do not support that. We could implement semantics where we refuse to allow user modification of the bit, but we would need to do a dnode_hold() in zfs_znode_alloc() to find out the encryption/compression information. That would introduce locking with a minor (although unmeasured) performance cost. It would also be inferior to zdb, which reports far more detailed information. We therefore omit reporting of encryption/compression through statx in favor of recommending that users interested in such information use zdb.

Reviewed-by: Tony Nguyen <firstname.lastname@example.org>
Reviewed-by: Allan Jude <email@example.com>
Reviewed-by: Brian Behlendorf <firstname.lastname@example.org>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Ryan Moeller <ryan@iXsystems.com>
Signed-off-by: Richard Yao <email@example.com>
Closes #8507
|c66e9307||pstef||Aug. 17, 2021, 5:08 p.m.||mount.h: improve a comment about flags|
The comment only specifies MNT_ROOTFS, which is set by the kernel when mounting its root file system, so it's not clear which other flags are not quite right, or for what reason.
|f49931c1||pstef||Aug. 17, 2021, 5:06 p.m.||style.9: remove an outdated comment about indent(1)|
indent(1) has had -ncs and -nbs for some time now.
|0f402668||noreply||Aug. 17, 2021, 5:01 p.m.||zfs.4: Fix typo s/compatiblity/compatibility/|
|d9f25575||arichardson||Aug. 17, 2021, 4:44 p.m.||Mark LLDB/CLANG_BOOTSTRAP/LLD_BOOTSTRAP as broken on non-FreeBSD for now|
I enabled these options again in 31ba4ce8898f9dfa5e7f054fdbc26e50a599a6e3, but unfortunately only my specific build configuration worked, whereas the build with default options is still broken.
|0e92585c||pstef||Aug. 17, 2021, 4:16 p.m.||fstyp: add BeFS support|
|6b88b4b5||noreply||Aug. 17, 2021, 4:15 p.m.||Remove b_pabd/b_rabd allocation from arc_hdr_alloc()|
When a header is allocated for a full overwrite it is a waste of time to allocate b_pabd/b_rabd for it, since arc_write() will free them without them ever being touched. If it is a read or a partial overwrite then arc_read() and arc_hdr_decrypt() allocate them explicitly. Reduced memory allocation in user threads also reduces ARC eviction throttling there, proportionally increasing it in ZIO threads, which is not good. To minimize or even avoid this, introduce an ARC allocation reserve, allowing certain arc_get_data_abd() callers to allocate a bit longer in situations where user threads will already throttle.

Reviewed-by: George Wilson <firstname.lastname@example.org>
Reviewed-by: Mark Maybee <email@example.com>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored-By: iXsystems, Inc.
Closes #12398
|dff1ba09||emaste||Aug. 17, 2021, 4:10 p.m.||sysctl.9: put negative sense sysctl note in own paragraph|
The sysctl man page cautions against negative-sense boolean sysctls (foobar_disable), but the note gets lost at the end of a large paragraph. Move it to a separate paragraph in an attempt to make it clearer. This man page could use a more holistic review and edit pass; this change is simple and straightforward and I hope provides a small but immediate benefit.
|72f0521a||noreply||Aug. 17, 2021, 3:59 p.m.||Increase default volblocksize from 8KB to 16KB|
Many things have changed since the previous default was set many years ago. Nowadays 8KB does not allow adequate compression or even decent space efficiency on many pools due to 4KB disk physical block rounding, especially on RAIDZ and DRAID. It effectively limits write throughput to only 2-3GB/s (250-350K blocks/s) due to sync thread, allocation, vdev queue and other block rate bottlenecks. It keeps L2ARC expensive despite many optimizations, and makes dedup just unrealistic.

Reviewed-by: Brian Behlendorf <firstname.lastname@example.org>
Reviewed-by: George Melikov <email@example.com>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Closes #12406
|bb7ad5d3||noreply||Aug. 17, 2021, 3:55 p.m.||Optimize arc_l2c_only lists assertions|
It is very expensive and not informative to call multilist_is_empty() for each arc_change_state() on debug builds to check for an impossible condition. Instead, implement a special index function for the arc_l2c_only->arcs_list multilists, panicking on any attempt to use it.

Reviewed-by: Mark Maybee <firstname.lastname@example.org>
Reviewed-by: Brian Behlendorf <email@example.com>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored-By: iXsystems, Inc.
Closes #12421
|cfe8e960||noreply||Aug. 17, 2021, 3:50 p.m.||Fix/improve dbuf hits accounting|
Instead of clearing stats inside arc_buf_alloc_impl(), do it inside arc_hdr_alloc() and arc_release(). This fixes statistics being wiped every time a new dbuf is filled from the ARC. Remove b_l1hdr.b_l2_hits; L2ARC hits are accounted in b_l2hdr.b_hits. Since the hits are accounted under the hash lock, replace atomics with simple increments.

Reviewed-by: Brian Behlendorf <firstname.lastname@example.org>
Reviewed-by: George Wilson <email@example.com>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored-By: iXsystems, Inc.
Closes #12422
|7f9d9e6f||noreply||Aug. 17, 2021, 3:47 p.m.||Avoid vq_lock drop in vdev_queue_aggregate()|
vq_lock is already too congested for two more operations per I/O. Instead of dropping and reacquiring it inside vdev_queue_aggregate(), delegate the zio_vdev_io_bypass() and zio_execute() calls for parent I/Os to the callers, which drop the lock anyway to execute the new I/O.

Reviewed-by: Brian Behlendorf <firstname.lastname@example.org>
Reviewed-by: Mark Maybee <email@example.com>
Reviewed-by: Brian Atkinson <firstname.lastname@example.org>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored-By: iXsystems, Inc.
Closes #12297
|e829a865||noreply||Aug. 17, 2021, 3:44 p.m.||Use more atomics in refcounts|
Use atomic_load_64() for zfs_refcount_count() to prevent torn reads on 32-bit platforms; on 64-bit ones it should not change anything. When built with ZFS_DEBUG but running without tracking enabled, use atomics instead of mutexes, the same as for builds without ZFS_DEBUG. Since rc_tracked can't change live, we can check it without the lock.

Reviewed-by: Brian Behlendorf <email@example.com>
Reviewed-by: Matthew Ahrens <firstname.lastname@example.org>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored-By: iXsystems, Inc.
Closes #12420