From: Jan Beulich
Subject: [Qemu-devel] [PATCH] xen/HVM: atomically access pointers in bufioreq handling
Date: Thu, 18 Jun 2015 14:18:39 +0100
The number of slots per page being 511 (i.e. not a power of two) means
that the (32-bit) read and write indexes going beyond 2^32 will likely
disturb operation. The hypervisor side gets I/O req server creation
extended so we can indicate that we're using suitable atomic accesses
where needed (not all accesses to the two pointers really need to be
atomic), allowing it to atomically canonicalize both pointers when both
have gone through at least one cycle.
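
To make the failure mode concrete: a buffered request at ring position p lives in slot p % IOREQ_BUFFER_SLOT_NUM, and because 2^32 is not a multiple of 511 that mapping is not preserved across a 32-bit wrap; the slot index jumps from 31 back to 0 instead of on to 32, and positions far fewer than 511 apart can suddenly share a slot. A minimal standalone sketch of the arithmetic (illustration only, not part of the patch; IOREQ_BUFFER_SLOT_NUM is redefined locally just for the demo):

#include <stdint.h>
#include <stdio.h>

/* 511 eight-byte slots plus the two 4-byte ring pointers fill one 4KiB page. */
#define IOREQ_BUFFER_SLOT_NUM 511

int main(void)
{
    uint32_t before = UINT32_MAX;        /* last position before the pointer wraps */
    uint32_t after  = before + 1;        /* wraps to 0 instead of reaching 2^32    */

    /* Consecutive positions normally map to consecutive slots ... */
    printf("slot(0xFFFFFFFF) = %u\n",
           (unsigned)(before % IOREQ_BUFFER_SLOT_NUM));                  /* 31 */
    /* ... but across the wrap the slot jumps back to 0 instead of on to 32,
     * because 2^32 % 511 == 32, not 0. */
    printf("slot(0x00000000) = %u\n",
           (unsigned)(after % IOREQ_BUFFER_SLOT_NUM));                   /*  0 */

    /* Consequence: two positions only 32 apart now share a slot, so a
     * still-pending entry can be overwritten even though the ring is
     * nowhere near full. */
    uint32_t pending = UINT32_MAX - 31;  /* 0xFFFFFFE0 */
    printf("slot(0xFFFFFFE0) = %u, distance to wrapped position 0: %u\n",
           (unsigned)(pending % IOREQ_BUFFER_SLOT_NUM),
           (unsigned)(after - pending));                                 /* 0, 32 */
    return 0;
}

Keeping both pointers canonicalized below 2^32, as the hypervisor-side change does once QEMU signals that it uses atomic accesses, avoids ever crossing that discontinuity.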
The Xen side counterpart (which is not a functional prereq to this
change, albeit a build one) can be found at e.g.
http://lists.xenproject.org/archives/html/xen-devel/2015-06/msg02996.html
Signed-off-by: Jan Beulich <address@hidden>
--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -981,19 +981,30 @@ static void handle_ioreq(XenIOState *sta
 
 static int handle_buffered_iopage(XenIOState *state)
 {
+    buffered_iopage_t *buf_page = state->buffered_io_page;
     buf_ioreq_t *buf_req = NULL;
     ioreq_t req;
     int qw;
 
-    if (!state->buffered_io_page) {
+    if (!buf_page) {
         return 0;
     }
 
     memset(&req, 0x00, sizeof(req));
 
-    while (state->buffered_io_page->read_pointer != state->buffered_io_page->write_pointer) {
-        buf_req = &state->buffered_io_page->buf_ioreq[
-            state->buffered_io_page->read_pointer % IOREQ_BUFFER_SLOT_NUM];
+    for (;;) {
+        uint32_t rdptr = buf_page->read_pointer, wrptr;
+
+        xen_rmb();
+        wrptr = buf_page->write_pointer;
+        xen_rmb();
+        if (rdptr != buf_page->read_pointer) {
+            continue;
+        }
+        if (rdptr == wrptr) {
+            break;
+        }
+        buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
         req.size = 1UL << buf_req->size;
         req.count = 1;
         req.addr = buf_req->addr;
@@ -1005,15 +1016,14 @@ static int handle_buffered_iopage(XenIOS
         req.data_is_ptr = 0;
         qw = (req.size == 8);
         if (qw) {
-            buf_req = &state->buffered_io_page->buf_ioreq[
-                (state->buffered_io_page->read_pointer + 1) % IOREQ_BUFFER_SLOT_NUM];
+            buf_req = &buf_page->buf_ioreq[(rdptr + 1) %
+                                           IOREQ_BUFFER_SLOT_NUM];
             req.data |= ((uint64_t)buf_req->data) << 32;
         }
 
         handle_ioreq(state, &req);
 
-        xen_mb();
-        state->buffered_io_page->read_pointer += qw ? 2 : 1;
+        atomic_add(&buf_page->read_pointer, qw + 1);
     }
 
     return req.count;
--- a/include/hw/xen/xen_common.h
+++ b/include/hw/xen/xen_common.h
@@ -370,7 +370,8 @@ static inline void xen_unmap_pcidev(XenX
 static inline int xen_create_ioreq_server(XenXC xc, domid_t dom,
                                           ioservid_t *ioservid)
 {
-    int rc = xc_hvm_create_ioreq_server(xc, dom, 1, ioservid);
+    int rc = xc_hvm_create_ioreq_server(xc, dom, HVM_IOREQSRV_BUFIOREQ_ATOMIC,
+                                        ioservid);
 
     if (rc == 0) {
         trace_xen_ioreq_server_create(*ioservid);