I’ve decided that every morning, I will make a random mashup of 5 problems in a suitable range, try to solve all of them, and check how much time I used. (I will introduce the problems in the order I solved them.)
This problem is pretty easy: we want to choose the blow with the largest di − hi and only use that one.
We can also use the blow with the highest damage to deal the final blow (which I didn’t observe at first).
But there was one small observation that I got wrong: I didn’t combine the two observations, and wrote that if the biggest di − hi is negative, there must be no answer.
But, for example, if x = 3, di = 4, hi = 5, then di − hi is negative, yet we can still kill it with the final blow.
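Here is a minimal sketch of the two observations combined, under my reading of the problem: the monster starts with x HP, blow i deals di damage and regrows hi HP if the monster survives, and we want the minimum number of blows. The input format (t test cases, then n and x, then the di/hi pairs) is an assumption, not taken from the statement.

```cpp
#include <bits/stdc++.h>
using namespace std;

void solve() {
    long long n, x;
    cin >> n >> x;
    long long best_diff = LLONG_MIN, best_dmg = 0;
    for (int i = 0; i < n; i++) {
        long long d, h;
        cin >> d >> h;
        best_diff = max(best_diff, d - h); // best net damage for a "normal" blow
        best_dmg = max(best_dmg, d);       // best candidate for the final blow
    }
    if (best_dmg >= x) { cout << 1 << "\n"; return; }   // one blow already kills it
    if (best_diff <= 0) { cout << -1 << "\n"; return; } // can't make progress otherwise
    // bring the HP down to at most best_dmg, then finish with the strongest blow
    long long need = (x - best_dmg + best_diff - 1) / best_diff;
    cout << need + 1 << "\n";
}

int main() {
    int t;
    cin >> t;
    while (t--) solve();
}
```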
This problem cost me around half of my total time, and 90% of the WAs.
I got an idea really quickly, but it turned out to be entirely wrong, and I didn’t notice until much later.
My thought was: obviously the current player wants to rush toward making a palindrome as soon as possible, while the other player just keeps reversing; after a palindrome is made they switch roles, with the other player now trying to make it a palindrome again and the current player reversing.
```cpp
void solve() {
    int n;
    string s;
    cin >> n >> s;
    int alice = 0, bob = 0;
    int altpair = 0, zeropair = 0;
    for (int i = 0; i < (n / 2); i++) {
        if (s[i] != s[n - i - 1]) {
            altpair += 1;
        } else if (s[i] == '0' && s[n - i - 1] == '0') {
            zeropair += 1;
        }
    }
    //cerr << "ZERO: " << zeropair << endl;
    bob += altpair;
    alice += (zeropair + 1) / 2;
    bob += (zeropair / 2);
    if (alice > bob) {
        cout << "BOB" << endl;
    } else if (alice < bob) {
        cout << "ALICE" << endl;
    } else {
        cout << "DRAW" << endl;
    }
    return;
}
```
This goes wrong on a case like 0000, where my code outputs a draw, but optimal play looks like: 0000 -> 1000 -> 1100 -> 1110 -> 0111 -> 1111, where Alice is guaranteed to lose.
There are also cases like when the length is odd and the middle character is 0: we can flip it and the string is still a palindrome.
This is such a case/observation-heavy problem, and it really shows that I need to practice more.
For some reason this problem is rated 1700, but it was the fastest one I did in this mashup; I solved it in under 15 minutes. It’s just a very standard sliding window + combinatorics problem.
We go through every number i and count the number of subsets with i as the last (largest) number; we can maintain the possible candidates with a sliding window. You also need to know how to compute modular inverses, but other than that it’s trivial.
```cpp
// mod defined ver
vector<int> factorial(2e5 + 5);
const int mod = 1e9 + 7;

int mabs(int a) { // convert to the form 0 <= a < mod
    return (a % mod + mod) % mod;
}
int mmul(int a, int b) {
    return mabs((a % mod) * (b % mod));
}
int madd(int a, int b) { // a + b
    return mabs(a % mod + b % mod);
}
int mmin(int a, int b) { // a - b
    return mabs(a % mod - b % mod);
}
int fastpow(int a, int n) { // calculate a^n % mod
    if (n == 0) return 1;
    int half = fastpow(a, n >> 1);
    if (n & 1) return mmul(mmul(half, half), a);
    else return mmul(half, half);
}
int mdiv(int a, int b) { // (a / b) % mod
    return mmul(a, fastpow(b, mod - 2));
}
int C(int a, int b) {
    if (b > a) return 0;
    return mdiv(factorial[a], mmul(factorial[b], factorial[a - b]));
}

void solve() {
    int n, m, k;
    cin >> n >> m >> k;
    vector<int> v(n);
    for (auto &i : v) cin >> i;
    sort(all(v));
    int l = 0, ans = 0;
    for (int i = 0; i < n; i++) {
        while (l < i && v[i] - v[l] > k) l++;
        ans = madd(ans, C((i - l), m - 1));
    }
    cout << ans << endl;
    return;
}
```
Random implementation problem: just make three groups, those who need gifts, those who need to give gifts, and those who need both.
We can give/get gifts between groups (or within the “both” group). Just simulate this process, and that’s pretty much it.
There is definitely a better solution for this (in fact, the model solution uses graphs), but I’m lazy.
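Since I don’t have the exact statement in front of me, here is a minimal sketch under my assumed reading: f[i] = 0 means friend i hasn’t chosen a recipient, every friend must receive exactly one gift, and self-gifts are forbidden. The variable names and I/O format are illustrative, and this is a simple pairing-with-swap rather than necessarily the author’s exact simulation.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<int> f(n + 1), receives(n + 1, 0);
    for (int i = 1; i <= n; i++) {
        cin >> f[i];
        if (f[i]) receives[f[i]] = 1;
    }
    vector<int> givers, receivers; // "need to give" and "need gifts" (some people are in both)
    for (int i = 1; i <= n; i++) {
        if (f[i] == 0) givers.push_back(i);
        if (!receives[i]) receivers.push_back(i);
    }
    // Pair them up; if someone would end up gifting themselves, swap with a
    // neighbouring pair (assumes a valid assignment exists, so a neighbour is available).
    for (int j = 0; j < (int)givers.size(); j++) {
        if (givers[j] == receivers[j]) {
            int k = (j + 1 < (int)givers.size()) ? j + 1 : j - 1;
            swap(receivers[j], receivers[k]);
        }
    }
    for (int j = 0; j < (int)givers.size(); j++) f[givers[j]] = receivers[j];
    for (int i = 1; i <= n; i++) cout << f[i] << " \n"[i == n];
}
```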
n is really small here, so we can just brute force it.
We can fix each index, apply all the segments that don’t go through this fixed point, and maintain the maximum answer along with the used segments.
I personally think the idea bears a resemblance to CF 1882B. Sets and Union.
```cpp
void solve() {
    int n, m;
    cin >> n >> m;
    vector<int> v(n);
    vector<int> chosen;
    for (int i = 0; i < n; i++) {
        cin >> v[i];
    }
    vector<pii> segment(m);
    for (int i = 0; i < m; i++) {
        int l, r;
        cin >> l >> r;
        l--; r--;
        segment[i].first = l;
        segment[i].second = r;
    }
    int ans = *max_element(all(v)) - *min_element(all(v));
    for (int i = 0; i < n; i++) {
        vector<int> v2 = v;
        vector<int> tmpchosen;
        for (int k = 0; k < m; k++) {
            int l = segment[k].first;
            int r = segment[k].second;
            if (l <= i && i <= r) continue;
            for (int j = l; j <= r; j++) v2[j]--;
            tmpchosen.pb(k + 1);
        }
        if (*max_element(all(v2)) - *min_element(all(v2)) > ans) {
            ans = *max_element(all(v2)) - *min_element(all(v2));
            chosen = tmpchosen;
        }
    }
    cout << ans << endl;
    cout << chosen.size() << endl;
    for (auto i : chosen) cout << i << " ";
    cout << endl;
    return;
}
```
Time Complexity: O(n²m)
Afterword
I think I can easily get under two hours, or even 1 hour 45 minutes; I was stuck on that one problem for way too long! I think I will also start documenting each problem’s individual time.
Compared to the easy version, n is much bigger this time, so we cannot apply all segments by brute force.
Luckily, segment trees exist, so we can do the same thing, except we use a lazy-tag segment tree to apply the segments.
You happily code up the segment tree, thinking “such an easy 2100”, and then you get TLE on test case 13. (Based on experience.)
The problem here is that if we still enumerate through all the segments for every index, then even though each range update is only O(log n), the total complexity is still O(nm log n), which doesn’t pass with the constraints.
Fortunately, there is a beautiful observation here: say we already have all the segments that don’t include index i; then we can easily determine which segments don’t include i+1. The way to do this is to add the segments that end at i and remove the segments that start at i+1. This way, each segment only gets added and removed once, and our complexity drops to O(n + m log n).
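Here is a minimal sketch of that sweep, assuming the same input/output format as the easy version. The lazy segment tree (range add, with the global max/min read off the root) and the final reconstruction of the chosen segments are my own filler, not the author’s code.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Lazy segment tree supporting range add; the global max/min sit at the root.
struct SegTree {
    int n;
    vector<ll> mx, mn, lz;
    SegTree(const vector<ll>& a) : n(a.size()), mx(4 * n), mn(4 * n), lz(4 * n, 0) {
        build(1, 0, n - 1, a);
    }
    void build(int v, int l, int r, const vector<ll>& a) {
        if (l == r) { mx[v] = mn[v] = a[l]; return; }
        int m = (l + r) / 2;
        build(2 * v, l, m, a); build(2 * v + 1, m + 1, r, a);
        pull(v);
    }
    void pull(int v) { mx[v] = max(mx[2*v], mx[2*v+1]); mn[v] = min(mn[2*v], mn[2*v+1]); }
    void apply(int v, ll x) { mx[v] += x; mn[v] += x; lz[v] += x; }
    void push(int v) { if (lz[v]) { apply(2*v, lz[v]); apply(2*v+1, lz[v]); lz[v] = 0; } }
    void update(int v, int l, int r, int ql, int qr, ll x) {
        if (qr < l || r < ql) return;
        if (ql <= l && r <= qr) { apply(v, x); return; }
        push(v);
        int m = (l + r) / 2;
        update(2*v, l, m, ql, qr, x); update(2*v+1, m+1, r, ql, qr, x);
        pull(v);
    }
    void update(int l, int r, ll x) { update(1, 0, n - 1, l, r, x); }
};

void solve() {
    int n, m;
    cin >> n >> m;
    vector<ll> v(n);
    for (auto &x : v) cin >> x;
    vector<pair<int,int>> seg(m);
    vector<vector<int>> startAt(n), endAt(n); // segments grouped by their endpoints
    for (int i = 0; i < m; i++) {
        int l, r; cin >> l >> r; l--; r--;
        seg[i] = {l, r};
        startAt[l].push_back(i);
        endAt[r].push_back(i);
    }
    // For i = 0, apply every segment that does not cover index 0 (i.e. l > 0).
    SegTree st(v);
    for (int i = 0; i < m; i++)
        if (seg[i].first > 0) st.update(seg[i].first, seg[i].second, -1);
    ll best = st.mx[1] - st.mn[1];
    int bestIdx = 0;
    for (int i = 1; i < n; i++) {
        // segments ending at i-1 no longer cover i -> apply them
        for (int id : endAt[i - 1]) st.update(seg[id].first, seg[id].second, -1);
        // segments starting at i now cover i -> undo them
        for (int id : startAt[i]) st.update(seg[id].first, seg[id].second, +1);
        if (st.mx[1] - st.mn[1] > best) { best = st.mx[1] - st.mn[1]; bestIdx = i; }
    }
    // Reconstruct the chosen segments for the best fixed index.
    vector<int> chosen;
    for (int i = 0; i < m; i++)
        if (!(seg[i].first <= bestIdx && bestIdx <= seg[i].second)) chosen.push_back(i + 1);
    cout << best << "\n" << chosen.size() << "\n";
    for (int id : chosen) cout << id << " ";
    cout << "\n";
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    solve();
}
```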