committer filter by committer.
@path/to/ filter by path in repository.
committer@path/to/ filter by committer AND path in repository.
abdef0123 filter by commit's SHA hash.
rNNN filter by SVN revision.
rNNN-rMMM filter by SVN revision range (inclusive).
Multiple filters can be specified, separated by spaces or commas, in which case they are combined with the OR operator.
daddbdc7 | behlendorf1 | June 13, 2019, 12:15 a.m. | Fix lockdep warning on insmod
sysfs_attr_init() is required to make lockdep happy for dynamically allocated sysfs attributes. This fixed #8868 on Fedora 29 running kernel-debug. This requirement was introduced in 2.6.34. See include/linux/sysfs.h for what it actually does.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Olaf Faaland <faaland1@llnl.gov>
Signed-off-by: Tomohiro Kusumi <kusumi.tomohiro@gmail.com>
Closes #8868
Closes #8884
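As background (not part of the commit itself), the pattern the fix relies on looks roughly like the sketch below: a dynamically allocated sysfs attribute is passed through sysfs_attr_init() before registration so lockdep gets a static lock class key for it. The helper name make_attr() is made up for illustration.

    /* Illustrative sketch, not the ZFS patch: initialize a dynamically
     * allocated sysfs attribute before registering it. */
    #include <linux/kobject.h>
    #include <linux/slab.h>
    #include <linux/sysfs.h>

    static struct kobj_attribute *make_attr(const char *name,
        ssize_t (*show)(struct kobject *, struct kobj_attribute *, char *))
    {
        struct kobj_attribute *kattr = kzalloc(sizeof(*kattr), GFP_KERNEL);

        if (kattr == NULL)
            return NULL;
        sysfs_attr_init(&kattr->attr);  /* gives lockdep a lock class key */
        kattr->attr.name = name;
        kattr->attr.mode = 0444;
        kattr->show = show;
        return kattr;   /* caller registers it, e.g. sysfs_create_file(kobj, &kattr->attr) */
    }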
61b54f34 | bdrewery | June 12, 2019, 11:09 p.m. | Don't delete .depend files outside of cleandepend.
efc5c442 | dim | June 12, 2019, 9:10 p.m. | Upgrade our copies of clang, llvm, lld, lldb, compiler-rt, libc++,
a04cd5cd | alc | June 12, 2019, 8:38 p.m. | Change pmap_demote_l2_locked() so that it removes the superpage mapping on a demotion failure.
Otherwise, some callers to pmap_demote_l2_locked(), such as pmap_protect(), may leave an incorrect mapping in place on a demotion failure. Change pmap_demote_l2_locked() so that it handles addresses that are not superpage aligned. Some callers to pmap_demote_l2_locked(), such as pmap_protect(), may not pass a superpage aligned address. Change pmap_enter_l2() so that it correctly calls vm_page_free_pages_toq(). The arm64 pmap is updating the count of wired pages when freeing page table pages, so pmap_enter_l2() should pass false to vm_page_free_pages_toq(). Optimize TLB invalidation in pmap_remove_l2().
Reviewed by: kib, markj (an earlier version)
Discussed with: andrew
MFC after: 3 weeks
Differential Revision: https://reviews.freebsd.org/D20585
d9b4bf06 | behlendorf1 | June 12, 2019, 8:13 p.m. | fat zap should prefetch when iterating
When iterating over a ZAP object, we're almost always certain to iterate over the entire object. If there are multiple leaf blocks, we can realize a performance win by issuing reads for all the leaf blocks in parallel when the iteration begins. For example, if we have 10,000 snapshots, "zfs destroy -nv pool/fs@1%9999" can take 30 minutes when the cache is cold. This change provides a >3x performance improvement, by issuing the reads for all ~64 blocks of each ZAP object in parallel.
Reviewed-by: Andreas Dilger <andreas.dilger@whamcloud.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Matthew Ahrens <mahrens@delphix.com>
External-issue: DLPX-58347
Closes #8862
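The shape of the optimization can be sketched as below. Every name in the sketch is hypothetical; this is not the OpenZFS ZAP code, only the "issue all leaf reads up front, then iterate" idea it describes.

    /* Hypothetical sketch of the idea: names are made up and are not the
     * OpenZFS ZAP API. Issue asynchronous reads for every leaf block
     * first, then walk the leaves against a warm cache. */
    #include <stdint.h>

    struct zap_cursor_sketch {
        uint64_t zc_num_leaf_blocks;            /* leaves backing this fat ZAP */
    };

    extern void leaf_read_async(uint64_t blkid); /* hypothetical: non-blocking read */
    extern void leaf_visit(uint64_t blkid);      /* hypothetical: per-leaf iteration */

    static void
    zap_iterate_prefetched(struct zap_cursor_sketch *zc)
    {
        uint64_t blk;

        /* Kick off all leaf reads in parallel; none of these block. */
        for (blk = 0; blk < zc->zc_num_leaf_blocks; blk++)
            leaf_read_async(blk);

        /* The sequential walk now mostly hits cached leaf blocks. */
        for (blk = 0; blk < zc->zc_num_leaf_blocks; blk++)
            leaf_visit(blk);
    }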
d9cd66e4 | behlendorf1 | June 12, 2019, 8:06 p.m. | Target ARC size can get reduced to arc_c_min
Sometimes the target ARC size is reduced to arc_c_min, which impacts performance. We've seen this happen as part of the random_reads performance regression test, where the ARC size is reduced before the reads test starts, which impacts how long it takes for the system to reach good IOPS performance. We call arc_reduce_target_size() when arc_reap_cb_check() returns TRUE, and arc_available_memory() is less than arc_c>>arc_shrink_shift. However, arc_available_memory() could easily be low, even when arc_c is low, because we can have tons of unused bufs in the abd kmem cache. This would be especially true just after the DMU requests a bunch of stuff be evicted from the ARC (e.g. due to "zpool export"). To fix this, the ARC should reduce arc_c by the requested amount, not all the way down to arc_size (or arc_c_min), which can be very small.
Reviewed-by: Tim Chase <tim@chase2k.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: George Melikov <mail@gmelikov.ru>
Signed-off-by: Matthew Ahrens <mahrens@delphix.com>
External-issue: DLPX-59431
Closes #8864
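The clamping being described amounts to something like the following sketch. The function and variable names are assumed for illustration; this is not the actual arc.c diff.

    /* Hypothetical sketch of the clamping idea: reduce the ARC target by
     * the requested amount only, never collapsing it below arc_c_min. */
    #include <stdint.h>

    static uint64_t
    arc_new_target_sketch(uint64_t arc_c, uint64_t arc_c_min, uint64_t to_free)
    {
        /* Shrink the target by the amount the reap callback asked for... */
        uint64_t c = (arc_c > to_free) ? arc_c - to_free : arc_c_min;

        /* ...but never below arc_c_min, and never all the way down to the
         * (possibly tiny) current arc_size, as the old behavior could. */
        return (c > arc_c_min) ? c : arc_c_min;
    }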
10269e02 | behlendorf1 | June 12, 2019, 8:03 p.m. | Fix typo in vdev_raidz_math.c
Fix typo in vdev_raidz_math.c
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: George Melikov <mail@gmelikov.ru>
Signed-off-by: Brad Forschinger <github@bnjf.id.au>
Closes #8875
Closes #8880
4b3f7927 | oshogbo | June 12, 2019, 7:31 p.m. | fileargs: add wrapping/unwrapping functions
a8024393 | oshogbo | June 12, 2019, 7:29 p.m. | geli: style nits
e7630efb | oshogbo | June 12, 2019, 7:29 p.m. | geli: partially revert r348709
705aad98 | shurd | June 12, 2019, 6:07 p.m. | Some devices take undesired actions when RTS and DTR are asserted.
Some development boards for example will reset on DTR, and some radio interfaces will transmit on RTS. This patch allows "stty -f /dev/ttyu9.init -rtsdtr" to prevent RTS and DTR from being asserted on open(), allowing these devices to be used without problems.
Reviewed by: imp
Differential Revision: https://reviews.freebsd.org/D20031
0026d8cc | jhb | June 12, 2019, 4:49 p.m. | Remove a spurious break when setting up a 64-bit memory BAR.
15242987 | jtl | June 12, 2019, 4:06 p.m. | The current IPMI KCS code is waiting 100us for all transitions (roughly between each byte either sent or received).
However, most transitions actually complete in 2-3 microseconds. By polling the status register with a delay of 4us with exponential backoff, the performance of most IPMI operations is significantly improved:
- A BMC update on a Supermicro x9 or x11 motherboard goes from ~1 hour to ~6-8 minutes.
- An ipmitool sensor list time improves by a factor of 4.
Testing showed no significant improvements on a modern server by using a lower delay. The changes should also generally reduce the total amount of CPU or I/O bandwidth used for a given IPMI operation.
Submitted by: Loic Prylli <lprylli@netflix.com>
Reviewed by: jhb
MFC after: 2 weeks
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D20527
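The polling strategy described above looks roughly like the sketch below. It is illustrative only, with an assumed function shape rather than the actual ipmi_kcs.c change; DELAY() is the standard FreeBSD kernel busy-wait.

    /* Illustrative sketch, not the actual sys/dev/ipmi change: poll a
     * status-ready predicate starting at 4us and back off exponentially,
     * instead of sleeping a fixed 100us between byte transitions. */
    #include <sys/param.h>
    #include <sys/systm.h>          /* DELAY() */

    static int
    kcs_poll_ready_sketch(int (*ready)(void *), void *sc, int timeout_us)
    {
        int delay_us = 4;           /* most transitions finish in 2-3us */
        int waited_us = 0;

        while (!ready(sc)) {
            if (waited_us >= timeout_us)
                return (1);         /* timed out */
            DELAY(delay_us);        /* busy-wait for delay_us microseconds */
            waited_us += delay_us;
            if (delay_us < 100)
                delay_us *= 2;      /* exponential backoff, capped at 100us */
        }
        return (0);
    }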
3a3ab509 | ian | June 12, 2019, 4:05 p.m. | Don't attempt to include hwpmc support for armv6, we're missing some of the
664104b4 | bdragon | June 12, 2019, 3:58 p.m. | Fix PPC970 boot after r348783
r348783 changed the behavior of the kernel mappings and broke booting on G5.
- Split the kernel mapping logic out so that the case where we are running from the wrong memory space is handled using identity mappings, and the case where we are not using a DMAP is handled by forcibly mapping the kernel into the dmap range as intended by r348783.
Reported by: Mikael Urankar
Reviewed by: luporl
Approved by: jhibbits (mentor)
Differential Revision: https://reviews.freebsd.org/D20608