committer filter by committer.
@path/to/ filter by path in repository.
committer@path/to/ filter by committer AND path in repository.
abdef0123 filter by commit's SHA hash.
rNNN filter by SVN revision.
rNNN-rMMM filter by SVN revisions range (inclusive).
Multiple filters can be specified, separated by spaces or commas, in which case they'll be combined using the OR operator.
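The filter grammar above can be sketched as a small token classifier. This is an illustrative sketch only, not the tracker's actual parser: the 7-character minimum for an abbreviated hash and the tie-breaking order (hash checked before committer name, so an all-hex committer name of 7+ characters would be ambiguous) are assumptions.

```c
#include <ctype.h>
#include <string.h>

enum filter_kind { F_COMMITTER, F_PATH, F_COMMITTER_PATH, F_SHA, F_REV, F_REVRANGE };

static int all_hex(const char *s)
{
    for (; *s != '\0'; s++)
        if (!isxdigit((unsigned char)*s))
            return 0;
    return 1;
}

/* Classify one whitespace/comma-separated filter token. */
static enum filter_kind classify(const char *tok)
{
    const char *at = strchr(tok, '@');

    if (at == tok)
        return F_PATH;               /* "@path/to/" */
    if (at != NULL)
        return F_COMMITTER_PATH;     /* "committer@path/to/" */
    if (tok[0] == 'r' && isdigit((unsigned char)tok[1]))
        return strchr(tok, '-') != NULL ? F_REVRANGE : F_REV; /* rNNN or rNNN-rMMM */
    if (strlen(tok) >= 7 && all_hex(tok))
        return F_SHA;                /* abbreviated commit hash (assumed >= 7 hex chars) */
    return F_COMMITTER;
}
```

A real implementation would also need to resolve the hash-vs-committer ambiguity, e.g. by querying both.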
|a91e4790||behlendorf1||Sept. 3, 2019, 6:29 p.m.||zvol_wait script should ignore partially received zvols|
|ebeb6f23||behlendorf1||Sept. 3, 2019, 5:56 p.m.||Always refuse receiving non-resume stream when resume state exists
This fixes a hole in the situation where the resume state is left from receiving a new dataset and, so, the state is set on the dataset itself (as opposed to %recv child). Additionally, distinguish incremental and resume streams in error messages. Reviewed-by: Matt Ahrens <email@example.com> Reviewed-by: Tom Caputi <firstname.lastname@example.org> Reviewed-by: Brian Behlendorf <email@example.com> Signed-off-by: Andriy Gapon <avg@FreeBSD.org> Closes #9252
|1a504d27||behlendorf1||Sept. 3, 2019, 5:46 p.m.||ZTS: Fix removal_cancel.ksh
Create a larger file to extend the time required to perform the removal. Occasional failures were observed due to the removal completing before the cancel could be requested. Reviewed-by: George Melikov <firstname.lastname@example.org> Reviewed-by: John Kennedy <email@example.com> Reviewed-by: Brian Behlendorf <firstname.lastname@example.org> Signed-off-by: Igor Kozhukhov <email@example.com> Closes #9259
|6988f3ed||behlendorf1||Sept. 3, 2019, 5:36 p.m.||Fix Intel QAT / ZFS compatibility on v4.7.1+ kernels|
|bb3c7a54||trasz||Sept. 3, 2019, 4:33 p.m.||Make linprocfs(4) report Tgid, Linux ltrace(1) needs it.|
|e3c3248c||mjg||Sept. 3, 2019, 3:42 p.m.||vfs: implement usecount implying holdcnt
vnodes have 2 reference counts - holdcnt to keep the vnode itself from getting freed and usecount to denote it is actively used. Previously all operations bumping usecount would also bump holdcnt, which is not necessary. We can detect if usecount is already > 1 (in which case holdcnt is also > 1) and utilize it to avoid bumping holdcnt on our own. This saves on atomic ops. Reviewed by: kib Tested by: pho (previous version) Sponsored by: The FreeBSD Foundation Differential Revision: https://reviews.freebsd.org/D21471
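The counting scheme in this commit can be modeled in user space with C11 atomics standing in for the kernel's atomic ops. This is a hedged sketch, not the actual vnode code: `vref_model`, `vrele_model`, and the struct layout are illustrative. The key idea is that only the 0 -> 1 usecount transition takes a hold; every later use reference piggybacks on that single implied hold.

```c
#include <stdatomic.h>

struct vnode_model {
    atomic_int usecount;    /* active users of the vnode */
    atomic_int holdcnt;     /* keeps the vnode itself from being freed */
};

static void vref_model(struct vnode_model *vp)
{
    int old = atomic_load(&vp->usecount);

    for (;;) {
        if (old > 0) {
            /* usecount > 0 implies holdcnt > 0: bump usecount alone. */
            if (atomic_compare_exchange_weak(&vp->usecount, &old, old + 1))
                return;
            /* CAS failed; 'old' was reloaded, retry. */
        } else {
            /* 0 -> 1 transition: take the hold that backs all use refs. */
            atomic_fetch_add(&vp->holdcnt, 1);
            if (atomic_compare_exchange_weak(&vp->usecount, &old, 1))
                return;
            atomic_fetch_sub(&vp->holdcnt, 1); /* lost the race, undo */
        }
    }
}

static void vrele_model(struct vnode_model *vp)
{
    /* Dropping the last use reference also drops the implied hold. */
    if (atomic_fetch_sub(&vp->usecount, 1) == 1)
        atomic_fetch_sub(&vp->holdcnt, 1);
}
```

With this model, N concurrent users cost N usecount bumps but only one holdcnt bump, which is where the saved atomic ops come from.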
|4d547561||imp||Sept. 3, 2019, 3:26 p.m.||Implement nvme suspend / resume for pci attachment
When we suspend, we need to properly shut down the NVMe controller. The controller may go into D3 state (or may have the power removed), and to properly flush the metadata to non-volatile RAM, we must complete a normal shutdown. This consists of deleting the I/O queues and setting the shutdown bit. We have to do some extra stuff to make sure we reset the software state of the queues as well. On resume, we have to reset the card twice, for reasons described in the attach function. Once we've done that, we can restart the card. If any of this fails, we'll fail the NVMe card, just like we do when a reset fails. Set is_resetting for the duration of the suspend / resume. This keeps the reset taskqueue from running a concurrent reset, and also is needed to prevent any hw completions from queueing more I/O to the card. Pass the resetting flag to nvme_ctrlr_start. It doesn't need to get that from the global state of the ctrlr. Wait for any pending reset to finish. All queued I/O will get sent to the hardware as part of nvme_ctrlr_start(), though the upper layers shouldn't send any down. Disabling the qpairs is the other failsafe to ensure all I/O is queued. Rename nvme_ctrlr_destroy_qpairs to nvme_ctrlr_delete_qpairs to avoid confusion with all the other destroy functions. It just removes the queues in hardware, while the other _destroy_ functions tear down driver data structures. Split parts of the hardware reset function up so that I can do part of the reset in suspend. Split out the software disabling of the qpairs into nvme_ctrlr_disable_qpairs. Finally, fix a couple of spelling errors in comments related to this. Relnotes: Yes MFC After: 1 week Reviewed by: scottl@ (prior version) Differential Revision: https://reviews.freebsd.org/D21493
|e46cfc25||markj||Sept. 3, 2019, 2:39 p.m.||Revert a portion of r351628 that I did not mean to commit.|
|7cdeaf33||markj||Sept. 3, 2019, 2:29 p.m.||Add preliminary support for atomic updates of per-page queue state.
Queue operations on a page use the page lock when updating the page to reflect the desired queue state, and the page queue lock when physically enqueuing or dequeuing a page. Multiple pages share a given page lock, but queue state is per-page; this false sharing results in heavy lock contention. Take a small step towards the use of atomic_cmpset to synchronize updates to per-page queue state by introducing vm_page_pqstate_cmpset() and using it in the page daemon. In the longer term the plan is to stop using the page lock to protect page identity and rely only on the object and page busy locks. However, since the page daemon avoids acquiring the object lock except when necessary, some synchronization with a concurrent free of the page is required. vm_page_pqstate_cmpset() can be used to ensure that queue state updates are successful only if the page is not scheduled for a dequeue, which is sufficient for the page daemon. Add vm_page_swapqueue(), which moves a page from one queue to another using vm_page_pqstate_cmpset(). Use it in the active queue scan, which does not use the object lock. Modify vm_page_dequeue_deferred() to use vm_page_pqstate_cmpset() as well. Reviewed by: kib Discussed with: jeff Sponsored by: Netflix Differential Revision: https://reviews.freebsd.org/D21257
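The cmpset approach above can be illustrated with a toy model: queue index and flags packed into one atomic word, so a queue-state update is a single compare-and-swap instead of a shared page lock. The field layout, `PQS_DEQUEUE` flag, and `page_swapqueue` name are illustrative assumptions, not the kernel's actual `vm_page` encoding.

```c
#include <stdatomic.h>
#include <stdint.h>

#define PQS_DEQUEUE 0x100u   /* page is scheduled for dequeue (illustrative flag) */

/* Pack a queue index (low byte) with flag bits into one word. */
static inline uint32_t pqstate_pack(uint32_t queue, uint32_t flags)
{
    return (queue & 0xffu) | flags;
}

/*
 * Move a page from oldq to newq only if it is still on oldq and no
 * dequeue is pending - the lock-free analogue of taking the page lock
 * to inspect and rewrite the queue state.  Returns 1 on success.
 */
static int page_swapqueue(_Atomic uint32_t *pqstate, uint32_t oldq, uint32_t newq)
{
    uint32_t old = atomic_load(pqstate);

    do {
        if ((old & 0xffu) != oldq || (old & PQS_DEQUEUE) != 0)
            return 0;   /* state changed underneath us; caller gives up */
    } while (!atomic_compare_exchange_weak(pqstate, &old,
        pqstate_pack(newq, old & ~(uint32_t)0xffu)));
    return 1;
}
```

Because the entire state fits in one word, a concurrent "schedule for dequeue" either lands before the CAS (and the swap is refused) or after it (and sees the new queue), which is the synchronization property the commit message describes for the page daemon.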
|9d75f0dc||markj||Sept. 3, 2019, 1:18 p.m.||Map the vm_page array into KVA on amd64.
r351198 allows the kernel to use domain-local memory to back the vm_page array (up to 2MB boundaries) and reserves a separate PML4 entry for that purpose. One consequence of that change is that the vm_page array is no longer present in minidumps, which only adds pages mapped above VM_MIN_KERNEL_ADDRESS. To avoid the friction caused by having kernel data structures mapped below VM_MIN_KERNEL_ADDRESS, map the vm_page array starting at VM_MIN_KERNEL_ADDRESS instead of using a dedicated PML4 entry. Reviewed by: kib Discussed with: jeff Sponsored by: The FreeBSD Foundation Differential Revision: https://reviews.freebsd.org/D21491
|f5791174||mjg||Sept. 3, 2019, 12:54 p.m.||pseudofs: fix a LOR pfs_node vs pidhash (sleepable after non-sleepable)|
|50f14c4f||avg||Sept. 3, 2019, 12:40 p.m.||superio: fix the copyright block and update the year|
|7fb6c523||lwhsu||Sept. 3, 2019, 10:49 a.m.||Temporarily skip sys.sys.qmath_test.qdivq_s64q in CI because it is unstable|
|c5c3ba6b||dim||Sept. 3, 2019, 5:58 a.m.||Merge ^/head r351317 through r351731.|
|b903ca97||dim||Sept. 3, 2019, 5:55 a.m.||Add workarounds for obsolete std::auto_ptr usage in atf.|