Add ThreadSanitizer support #2406

Open · wants to merge 2 commits into master (showing changes from 1 commit)
18 changes: 9 additions & 9 deletions tests/unit/distributed_test.cc
@@ -142,7 +142,7 @@ SEASTAR_TEST_CASE(test_map_reduce_lifetime) {
         }
         auto operator()(const X& x) {
             return yield().then([this, &x] {
-                BOOST_REQUIRE(!destroyed);
+                assert(!destroyed);
@tchaikov (Contributor) commented on Aug 27, 2024:

> BOOST_REQUIRE macros are not thread safe, so replace them with
> asserts in multithreaded environment

Could you share the impact of "not thread safe"? Did you find the issue by inspecting the code, or by enabling ThreadSanitizer and having it warn on this usage?

The author replied:

1. ThreadSanitizer warns about it.
2. The Boost.Test docs say as much.
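
For reference, here is a toy stand-in (not Boost.Test's actual internals) for the kind of data race ThreadSanitizer reports when a result-recording check macro runs on several threads; the shared counter is an assumed simplification of the framework's bookkeeping:

```cpp
// toy_require.cc -- illustrative only; not Boost.Test's implementation.
// A check macro that bumps shared, unsynchronized state races as soon as it
// runs on several threads; building with -fsanitize=thread flags it.
#include <thread>
#include <vector>

static unsigned long g_checks_passed = 0;   // shared, no synchronization

#define TOY_REQUIRE(cond) do { if (cond) { ++g_checks_passed; } } while (0)

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) {
        threads.emplace_back([] {
            TOY_REQUIRE(1 + 1 == 2);        // data race on g_checks_passed
        });
    }
    for (auto& t : threads) {
        t.join();
    }
    return 0;
}
```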

Contributor:

Normal Seastar tests run on a single reactor thread, but this specific test deliberately starts the same code on multiple threads. And then, supposedly, you're not supposed to use BOOST_REQUIRE from multiple threads.

However, I don't think replacing BOOST_REQUIRE with assert() is the right thing to do: it replaces a clean test failure of a single test function with a crash of the entire executable... :-(

I think it makes more sense to have an (atomic-variable) counter of errors, and then BOOST_REQUIRE that the counter is 0 when do_with_distributed ends (on a single thread); see the sketch below.
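
A minimal sketch of that suggestion, using a plain std::thread harness rather than the PR's Seastar code (all names here are illustrative): the worker threads only bump an atomic counter, and the single non-thread-safe BOOST_REQUIRE runs afterwards on one thread.

```cpp
// atomic_error_counter.cc -- sketch of the suggested pattern, not the PR's code.
#define BOOST_TEST_MODULE atomic_error_counter_sketch
#include <boost/test/included/unit_test.hpp>

#include <atomic>
#include <thread>
#include <vector>

BOOST_AUTO_TEST_CASE(errors_counted_then_checked_once) {
    std::atomic<int> errors{0};

    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t) {
        workers.emplace_back([&errors] {
            bool destroyed = false;          // stand-in for the real invariant
            if (destroyed) {
                // Thread-safe recording; no Boost macro off the test thread.
                errors.fetch_add(1, std::memory_order_relaxed);
            }
        });
    }
    for (auto& w : workers) {
        w.join();
    }

    // Back on the test's own thread: one safe BOOST_REQUIRE gives a clean
    // test failure instead of an assert() crash of the whole executable.
    BOOST_REQUIRE_EQUAL(errors.load(), 0);
}
```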

Contributor:

BTW, I find it really shocking that in 2024 Boost.Test doesn't support multiple threads out of the box. It would be trivial to support (especially when performance is not even an issue). It reduces whatever little confidence I had in this test framework :-(

                 return x.cpu_id_squared();
             });
         }
@@ -158,7 +158,7 @@ SEASTAR_TEST_CASE(test_map_reduce_lifetime) {
         }
         auto operator()(int x) {
             return yield().then([this, x] {
-                BOOST_REQUIRE(!destroyed);
+                assert(!destroyed);
                 res += x;
             });
         }
@@ -169,7 +169,7 @@ SEASTAR_TEST_CASE(test_map_reduce_lifetime) {
             return x.map_reduce(reduce{result}, map{}).then([&result] {
                 long n = smp::count - 1;
                 long expected = (n * (n + 1) * (2*n + 1)) / 6;
-                BOOST_REQUIRE_EQUAL(result, expected);
+                assert(result == expected);
             });
         });
     });
@@ -186,7 +186,7 @@ SEASTAR_TEST_CASE(test_map_reduce0_lifetime) {
         }
         auto operator()(const X& x) const {
             return yield().then([this, &x] {
-                BOOST_REQUIRE(!destroyed);
+                assert(!destroyed);
                 return x.cpu_id_squared();
             });
         }
@@ -199,7 +199,7 @@ SEASTAR_TEST_CASE(test_map_reduce0_lifetime) {
             destroyed = true;
         }
         auto operator()(long res, int x) {
-            BOOST_REQUIRE(!destroyed);
+            assert(!destroyed);
             return res + x;
         }
     };
@@ -208,7 +208,7 @@ SEASTAR_TEST_CASE(test_map_reduce0_lifetime) {
         return x.map_reduce0(map{}, 0L, reduce{}).then([] (long result) {
             long n = smp::count - 1;
             long expected = (n * (n + 1) * (2*n + 1)) / 6;
-            BOOST_REQUIRE_EQUAL(result, expected);
+            assert(result == expected);
         });
     });
 });
@@ -224,17 +224,17 @@ SEASTAR_TEST_CASE(test_map_lifetime) {
         }
         auto operator()(const X& x) const {
             return yield().then([this, &x] {
-                BOOST_REQUIRE(!destroyed);
+                assert(!destroyed);
                 return x.cpu_id_squared();
             });
         }
     };
     return do_with_distributed<X>([] (distributed<X>& x) {
         return x.start().then([&x] {
             return x.map(map{}).then([] (std::vector<int> result) {
-                BOOST_REQUIRE_EQUAL(result.size(), smp::count);
+                assert(result.size() == smp::count);
                 for (size_t i = 0; i < (size_t)smp::count; i++) {
-                    BOOST_REQUIRE_EQUAL(result[i], i * i);
+                    assert(std::cmp_equal(result[i], i * i));
                 }
             });
         });
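For reference on the last hunk: `result[i]` is an `int` while `i * i` is a `size_t`, and a plain `==` across signedness converts the `int` to unsigned first. `std::cmp_equal` (C++20, `<utility>`) compares the mathematical values instead, which keeps the new `assert` free of sign-conversion surprises. A minimal demo:

```cpp
// cmp_equal_demo.cc -- why std::cmp_equal instead of == for mixed signedness.
#include <cassert>
#include <cstddef>
#include <utility>

int main() {
    int r = -1;
    std::size_t big = static_cast<std::size_t>(-1);  // SIZE_MAX

    // Plain ==: r is converted to std::size_t and wraps to SIZE_MAX,
    // so the two compare "equal" (and -Wsign-compare warns about it).
    assert(r == big);

    // std::cmp_equal compares values mathematically: -1 != SIZE_MAX.
    assert(!std::cmp_equal(r, big));
    return 0;
}
```
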
34 changes: 17 additions & 17 deletions tests/unit/scheduling_group_test.cc
@@ -71,8 +71,8 @@ SEASTAR_THREAD_TEST_CASE(sg_specific_values_define_after_sg_create) {
         }

         for (int i=0; i < num_scheduling_groups; i++) {
-            BOOST_REQUIRE_EQUAL(sgs[i].get_specific<int>(key1) = (i + 1) * factor, (i + 1) * factor);
-            BOOST_REQUIRE_EQUAL(sgs[i].get_specific<ivec>(key2)[0], (i + 1) * factor);
+            assert(sgs[i].get_specific<int>(key1) == (i + 1) * factor);
+            assert(sgs[i].get_specific<ivec>(key2)[0] == (i + 1) * factor);
         }

     }).get();
@@ -81,7 +81,7 @@ SEASTAR_THREAD_TEST_CASE(sg_specific_values_define_after_sg_create) {
         return reduce_scheduling_group_specific<int>(std::plus<int>(), int(0), key1).then([] (int sum) {
             int factor = this_shard_id() + 1;
             int expected_sum = ((1 + num_scheduling_groups)*num_scheduling_groups) * factor /2;
-            BOOST_REQUIRE_EQUAL(expected_sum, sum);
+            assert(expected_sum == sum);
         }). then([key2] {
             auto ivec_to_int = [] (ivec& v) {
                 return v.size() ? v[0] : 0;
@@ -90,7 +90,7 @@ SEASTAR_THREAD_TEST_CASE(sg_specific_values_define_after_sg_create) {
             return map_reduce_scheduling_group_specific<ivec>(ivec_to_int, std::plus<int>(), int(0), key2).then([] (int sum) {
                 int factor = this_shard_id() + 1;
                 int expected_sum = ((1 + num_scheduling_groups)*num_scheduling_groups) * factor /2;
-                BOOST_REQUIRE_EQUAL(expected_sum, sum);
+                assert(expected_sum == sum);
             });

         });
@@ -129,8 +129,8 @@ SEASTAR_THREAD_TEST_CASE(sg_specific_values_define_before_sg_create) {
         }

         for (int i=0; i < num_scheduling_groups; i++) {
-            BOOST_REQUIRE_EQUAL(sgs[i].get_specific<int>(key1) = (i + 1) * factor, (i + 1) * factor);
-            BOOST_REQUIRE_EQUAL(sgs[i].get_specific<ivec>(key2)[0], (i + 1) * factor);
+            assert(sgs[i].get_specific<int>(key1) == (i + 1) * factor);
+            assert(sgs[i].get_specific<ivec>(key2)[0] == (i + 1) * factor);
         }

     }).get();
@@ -139,7 +139,7 @@ SEASTAR_THREAD_TEST_CASE(sg_specific_values_define_before_sg_create) {
         return reduce_scheduling_group_specific<int>(std::plus<int>(), int(0), key1).then([] (int sum) {
             int factor = this_shard_id() + 1;
             int expected_sum = ((1 + num_scheduling_groups)*num_scheduling_groups) * factor /2;
-            BOOST_REQUIRE_EQUAL(expected_sum, sum);
+            assert(expected_sum == sum);
         }). then([key2] {
             auto ivec_to_int = [] (ivec& v) {
                 return v.size() ? v[0] : 0;
@@ -148,7 +148,7 @@ SEASTAR_THREAD_TEST_CASE(sg_specific_values_define_before_sg_create) {
             return map_reduce_scheduling_group_specific<ivec>(ivec_to_int, std::plus<int>(), int(0), key2).then([] (int sum) {
                 int factor = this_shard_id() + 1;
                 int expected_sum = ((1 + num_scheduling_groups)*num_scheduling_groups) * factor /2;
-                BOOST_REQUIRE_EQUAL(expected_sum, sum);
+                assert(expected_sum == sum);
             });

         });
@@ -191,8 +191,8 @@ SEASTAR_THREAD_TEST_CASE(sg_specific_values_define_before_and_after_sg_create) {
         }

         for (int i=0; i < num_scheduling_groups; i++) {
-            BOOST_REQUIRE_EQUAL(sgs[i].get_specific<int>(key1) = (i + 1) * factor, (i + 1) * factor);
-            BOOST_REQUIRE_EQUAL(sgs[i].get_specific<ivec>(key2)[0], (i + 1) * factor);
+            assert(sgs[i].get_specific<int>(key1) == (i + 1) * factor);
+            assert(sgs[i].get_specific<ivec>(key2)[0] == (i + 1) * factor);
         }

     }).get();
@@ -201,7 +201,7 @@ SEASTAR_THREAD_TEST_CASE(sg_specific_values_define_before_and_after_sg_create) {
         return reduce_scheduling_group_specific<int>(std::plus<int>(), int(0), key1).then([] (int sum) {
             int factor = this_shard_id() + 1;
             int expected_sum = ((1 + num_scheduling_groups)*num_scheduling_groups) * factor /2;
-            BOOST_REQUIRE_EQUAL(expected_sum, sum);
+            assert(expected_sum == sum);
         }). then([key2] {
             auto ivec_to_int = [] (ivec& v) {
                 return v.size() ? v[0] : 0;
@@ -210,7 +210,7 @@ SEASTAR_THREAD_TEST_CASE(sg_specific_values_define_before_and_after_sg_create) {
             return map_reduce_scheduling_group_specific<ivec>(ivec_to_int, std::plus<int>(), int(0), key2).then([] (int sum) {
                 int factor = this_shard_id() + 1;
                 int expected_sum = ((1 + num_scheduling_groups)*num_scheduling_groups) * factor /2;
-                BOOST_REQUIRE_EQUAL(expected_sum, sum);
+                assert(expected_sum == sum);
             });

         });
@@ -234,7 +234,7 @@ SEASTAR_THREAD_TEST_CASE(sg_scheduling_group_inheritance_in_seastar_async_test)
                 internal::scheduling_group_index(*(attr.sched_group)));

             smp::invoke_on_all([sched_group_idx = internal::scheduling_group_index(*(attr.sched_group))] () {
-                BOOST_REQUIRE_EQUAL(internal::scheduling_group_index(current_scheduling_group()), sched_group_idx);
+                assert(internal::scheduling_group_index(current_scheduling_group()) == sched_group_idx);
             }).get();
         }).get();
     }).get();
@@ -326,7 +326,7 @@ SEASTAR_THREAD_TEST_CASE(sg_rename_callback) {
     smp::invoke_on_all([&sgs, &keys] () {
         for (size_t s = 0; s < std::size(sgs); ++s) {
             for (size_t k = 0; k < std::size(keys); ++k) {
-                BOOST_REQUIRE_EQUAL(sgs[s].get_specific<value>(keys[k])._sg_name, fmt::format("sg-old-{}", s));
+                assert(sgs[s].get_specific<value>(keys[k])._sg_name == fmt::format("sg-old-{}", s));
                 sgs[s].get_specific<value>(keys[k])._id = 1;
             }
         }
@@ -339,9 +339,9 @@ SEASTAR_THREAD_TEST_CASE(sg_rename_callback) {
     smp::invoke_on_all([&sgs, &keys] () {
         for (size_t s = 0; s < std::size(sgs); ++s) {
             for (size_t k = 0; k < std::size(keys); ++k) {
-                BOOST_REQUIRE_EQUAL(sgs[s].get_specific<value>(keys[k])._sg_name, fmt::format("sg-new-{}", s));
+                assert(sgs[s].get_specific<value>(keys[k])._sg_name == fmt::format("sg-new-{}", s));
                 // Checks that the value only saw a rename(), not a reconstruction.
-                BOOST_REQUIRE_EQUAL(sgs[s].get_specific<value>(keys[k])._id, 1);
+                assert(sgs[s].get_specific<value>(keys[k])._id == 1);
             }
         }
     }).get();
@@ -364,7 +364,7 @@ SEASTAR_THREAD_TEST_CASE(sg_create_check_unique_constructor_invocation) {
         check() {
             auto sg = current_scheduling_group();
             auto id = internal::scheduling_group_index(sg);
-            BOOST_REQUIRE(groups.count(id) == 0);
+            assert(groups.count(id) == 0);
             groups.emplace(id);
         }
     };