Hey guys, life has been pretty busy and tough lately, so I haven’t updated in a while. I decided to write a short blog about some CM+ (>=1900) problems I solved on CF recently. The problems are roughly sorted by how hard I feel they are.
I know this question is only 1800, but I feel like it deserves at least 1900.
The high level idea for this problem is to find how many slimes we need to accumulate from the left/right to eat this slime. Formally, for each index i, find the maximum j < i where a[j] + ... + a[i−1] > a[i], and the minimum k > i where a[i+1] + ... + a[k] > a[i]; the answer for i is then min(k − i, i − j). It’s easy to see we can use prefix sums and binary search to obtain O(log n) per index.
The tricky part about this problem is dealing with duplicate values in the same row, as equal slimes technically cannot eat each other, and it causes a lot of problems while implementing.
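A minimal sketch of just the prefix-sum + binary-search part described above (all names are mine, and it deliberately ignores the equal-value edge case just mentioned):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<long long> a(n + 1), pre(n + 1, 0);
    for (int i = 1; i <= n; i++) {
        cin >> a[i];
        pre[i] = pre[i - 1] + a[i];
    }
    for (int i = 1; i <= n; i++) {
        long long best = LLONG_MAX;
        // left side: largest j < i with a[j] + ... + a[i-1] > a[i]
        int lo = 1, hi = i - 1, j = -1;
        while (lo <= hi) {
            int m = (lo + hi) / 2;
            if (pre[i - 1] - pre[m - 1] > a[i]) { j = m; lo = m + 1; }
            else hi = m - 1;
        }
        if (j != -1) best = min(best, (long long)(i - j));
        // right side: smallest k > i with a[i+1] + ... + a[k] > a[i]
        lo = i + 1, hi = n;
        int k = -1;
        while (lo <= hi) {
            int m = (lo + hi) / 2;
            if (pre[m] - pre[i] > a[i]) { k = m; hi = m - 1; }
            else lo = m + 1;
        }
        if (k != -1) best = min(best, (long long)(k - i));
        // -1 if neither side can ever accumulate enough (adjust output format to the actual statement)
        cout << (best == LLONG_MAX ? -1 : best) << " \n"[i == n];
    }
}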
This problem honestly does not deserve 1900; I solved it in like 15 minutes max.
If the number of edges incident to node 1 is less than D, then the answer is clearly no. Otherwise, we first want to find “crucial edges”: edges connected to node 1 that must be used in order for the whole graph to be connected.
We can find these edges by building a spanning forest while skipping all edges that connect to node 1. After that, it’s easy to find the edges from node 1 that we must use to connect the whole graph.
Let’s denote the number of crucial edges as k, and the total number of edges connecting to node 1 as K; then k <= D <= K clearly must hold.
Now, how do we actually construct the tree? An easy way is to first connect the crucial edges, then add more edges incident to node 1 until it has D of them, and lastly add arbitrary remaining edges to obtain a spanning tree. It should be obvious that this guarantees a spanning tree with D edges connecting to node 1.
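A rough DSU-based sketch of this construction (my own names; I’m assuming the usual n, m, D plus edge-list input with no multi-edges):

#include <bits/stdc++.h>
using namespace std;

// simple DSU used only for this sketch
struct DSU {
    vector<int> p;
    DSU(int n) : p(n + 1) { iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    bool unite(int a, int b) { a = find(a), b = find(b); if (a == b) return false; p[a] = b; return true; }
};

int main() {
    int n, m, D;
    cin >> n >> m >> D;
    vector<pair<int,int>> edges(m);
    vector<int> adj1; // the other endpoints of edges touching node 1
    for (auto &[u, v] : edges) {
        cin >> u >> v;
        if (u == 1) adj1.push_back(v);
        if (v == 1) adj1.push_back(u);
    }
    if ((int)adj1.size() < D) { cout << "NO\n"; return 0; }

    // components of the graph with node 1 removed
    DSU comp(n);
    for (auto [u, v] : edges)
        if (u != 1 && v != 1) comp.unite(u, v);

    // crucial edges: exactly one node-1 edge into each component
    DSU tree(n);
    vector<pair<int,int>> ans;
    map<int,int> pick; // component root -> some neighbor of node 1 inside it
    for (int v : adj1) pick[comp.find(v)] = v;
    for (auto [root, v] : pick) { tree.unite(1, v); ans.push_back({1, v}); }
    if ((int)ans.size() > D) { cout << "NO\n"; return 0; }

    // add more node-1 edges until node 1 has exactly D tree edges
    // (with no multi-edges, deg(1) >= D guarantees we can reach D here)
    for (int v : adj1) {
        if ((int)ans.size() == D) break;
        if (tree.unite(1, v)) ans.push_back({1, v});
    }
    // finish the spanning tree with the remaining edges
    for (auto [u, v] : edges) {
        if (u == 1 || v == 1) continue;
        if (tree.unite(u, v)) ans.push_back({u, v});
    }
    cout << "YES\n";
    for (auto [u, v] : ans) cout << u << " " << v << "\n";
}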
Let’s say we use k′ colors; then by the pigeonhole principle, any node with more than k′ edges connected to it would clearly become a “not good” node.
It’s not difficult to see that we can always color the tree with k′ colors so that only nodes with more than k′ edges are “not good” nodes. Consider this coloring strategy: if a node has more than k′ edges, we color all of them with the same color, and if the node doesn’t have more than k′ edges, we can clearly color them with all distinct colors.
We can find, for every k′, the number of nodes that have more than k′ edges connected by counting node degrees and doing a simple prefix/suffix sum over those counts.
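A small sketch of that counting step (my own variable names), where more[k] ends up being the number of nodes with degree greater than k; I’m assuming the answer is wanted for every k′ from 1 to n − 1, adjust to the actual statement:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<int> deg(n + 1, 0);
    for (int i = 0; i < n - 1; i++) { // tree edges
        int u, v;
        cin >> u >> v;
        deg[u]++; deg[v]++;
    }
    vector<long long> cnt(n + 2, 0); // cnt[d] = number of nodes with degree d
    for (int v = 1; v <= n; v++) cnt[deg[v]]++;
    vector<long long> more(n + 2, 0); // more[k] = number of nodes with degree > k
    for (int d = n - 1; d >= 0; d--) more[d] = more[d + 1] + cnt[d + 1];
    // with k' colors, the "not good" nodes are exactly the more[k'] nodes of degree > k'
    for (int k = 1; k <= n - 1; k++) cout << more[k] << " \n"[k == n - 1];
}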
The trick here is to find an efficient way to simulate this process. Notice that only when a monster gets killed do the two monsters beside it have a chance to get killed in the next round, so there are actually not a lot of monsters we need to consider in each round.
Note: My implementation is extremely convoluted and lackluster; refer to the editorial for a better implementation.
Interesting problem, but it has a couple of points that are clear giveaways.
First, notice that the queries ask for v(li, ri), so we need some way to calculate v(li, ri) in O(1) or O(log n) time.
The second giveaway is the specific modulus 9. If you remember from middle school, the way to check whether a number is divisible by 9 is to check whether the sum of its digits is divisible by 9.
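Just to illustrate that rule with a tiny standalone snippet (nothing problem-specific here):

#include <bits/stdc++.h>
using namespace std;

// digit sum of x
long long digitSum(long long x) {
    long long s = 0;
    for (; x > 0; x /= 10) s += x % 10;
    return s;
}

int main() {
    // a number is divisible by 9 exactly when its digit sum is;
    // more generally, x and digitSum(x) leave the same remainder mod 9
    for (long long x : {81LL, 1234LL, 999999LL, 123456789LL})
        cout << x << ": " << x % 9 << " == " << digitSum(x) % 9 << "\n";
}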
Using these two hints, we can easily see that the answer has to do with prefix sums!
We can preprocess a prefix sum and also preprocess the smallest (and second smallest) L for each remainder from 0 to 8.
Let’s define the cost of a teleporter as min(ai + i, ai + n + 1 − i), which is just its own cost plus the cheaper of walking to it from the front or from the back. It’s clear that after the initial teleporter, we will choose teleporters from lowest cost to highest cost.
The problem now is how to determine the first teleporter to use. We can iterate over each teleporter as the first one, and use binary search + prefix sums to determine how many teleporters we can use after it. The catch is that when you binary search, the range might include the teleporter you already used as the initial one, so you need to keep track of that and deduct its value whenever the binary search includes it.
// shorthands from my template; I use long long for the costs since the prefix sums can get large
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
using vi = vector<ll>;
using pii = pair<ll, ll>;
#define pb push_back
#define all(x) (x).begin(), (x).end()

vi v(2e5 + 5, 0);
vector<pii> order;
vi prefix;

void solve() {
    int n;
    ll c;
    cin >> n >> c;
    order.clear();
    prefix.clear();
    for (int i = 1; i <= n; i++) {
        cin >> v[i];
        // cost of teleporter i: its own cost plus the cheaper walk (from the front or from the back)
        order.pb({min(v[i] + i, v[i] + (n - i + 1)), i});
    }
    sort(all(order));
    prefix.pb(0);
    for (int i = 0; i < n; i++) prefix.pb(prefix.back() + order[i].first);
    int ans = 0;
    for (int i = 0; i < n; i++) {
        // coins left after walking from the start to teleporter order[i].second and using it first
        ll cst = c - (v[order[i].second] + order[i].second);
        int l = 0, r = n;
        while (r - l > 1) {
            int m = (r + l) / 2;
            ll cmp = prefix[m];
            if (m >= (i + 1)) cmp -= order[i].first; // don't pay for the initial teleporter twice
            if (cst >= cmp) l = m;
            else r = m;
        }
        ll ret = prefix[r];
        int use;
        if (r >= (i + 1)) ret -= order[i].first;
        if (cst >= ret) use = r;
        else use = l;
        // if the initial teleporter wasn't inside the counted prefix, count it separately
        if (use < (i + 1) && cst >= 0) use += 1;
        ans = max(ans, use);
    }
    cout << ans << endl;
    return;
}
A really tricky dp problem that stumped me for a good while, but the solution is one of the shortest on this list.
Let dp[u] denote the number of non-empty sets of vertices inside the subtree of u such that no vertex in the set is an ancestor of another.
We can get the transition dp[u] = ∏(dp[vi] + 1), where the vi are the children of u.
This dp state is essentially picking a combination of sets from the children’s subtrees (the +1 is for when we choose the empty set from that child’s subtree). When all of them are empty, the picked set is just the node u itself.
Now to calculate the answer, we need to divide into two parts:
If there is no pair of vertices where one is the ancestor of the other, then the answer would be dp[1] + 1.
But we can also tolerate exactly 1 pair of vertices where one is the ancestor of the other. Let’s say u is the ancestor of another vertex; then the number of ways for this to happen is ∑dp[vi], where the vi are the children of u.
We then consider this case for every node, and the answer becomes (dp[1] + 1) + dp[2] + ... + dp[n] = 1 + ∑dp[i].
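A short DFS sketch of this dp with my own names; the modulus is an assumption on my part since these counts blow up quickly, so check the actual statement:

#include <bits/stdc++.h>
using namespace std;

const long long MOD = 998244353; // assumption on my part, check the actual statement

int n;
vector<vector<int>> g;
vector<long long> dp;

void dfs(int u, int parent) {
    dp[u] = 1; // the all-empty choice across children corresponds to picking just u
    for (int v : g[u]) {
        if (v == parent) continue;
        dfs(v, u);
        dp[u] = dp[u] * ((dp[v] + 1) % MOD) % MOD;
    }
}

int main() {
    cin >> n;
    g.assign(n + 1, {});
    dp.assign(n + 1, 0);
    for (int i = 0; i < n - 1; i++) {
        int u, v;
        cin >> u >> v;
        g[u].push_back(v);
        g[v].push_back(u);
    }
    dfs(1, 0); // recursive dfs; very deep trees may need an iterative version
    long long ans = 1; // the "+1" term
    for (int i = 1; i <= n; i++) ans = (ans + dp[i]) % MOD;
    cout << ans << "\n";
}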
I genuinely do not understand how this problem is 1800… maybe I’m just really bad at dp.
Let’s say for a node k with subtree sizes S1, ..., Si, in each subtree we can assign the values as either all bigger than node k or all smaller; then our goal is to group the subtrees into two groups such that the product of the sizes of the two groups is maximized.
The first claim is that we can always achieve this optimal upper bound with some assignment. I actually do not know how to prove this and just winged it when I wrote my solution, but if you are interested you can check out the editorial :).
Now the problem becomes cutting the set S1, ..., Si into two sets such that the product is maximized. In general this is an NP-hard problem, but ∑Si is actually pretty small, so we can transform it into a subset sum problem.
Let S = ∑Si; then the answer is max((S − j) ⋅ j) over all j that are possible subset sums.
It should be pretty easy to write an O(n^2) solution for subset sum, so just do it for each node and yay!
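A sketch of that per-node step with my own names (the tree traversal that collects each node’s child subtree sizes is omitted): a standard boolean subset-sum dp over the sizes, then take the best split.

#include <bits/stdc++.h>
using namespace std;

// given the subtree sizes of one node's children, return max (S - j) * j
// over all achievable subset sums j (a simple boolean subset-sum dp)
long long bestSplit(const vector<int>& sizes) {
    int S = accumulate(sizes.begin(), sizes.end(), 0);
    vector<char> reach(S + 1, 0);
    reach[0] = 1;
    for (int s : sizes)
        for (int j = S; j >= s; j--)
            if (reach[j - s]) reach[j] = 1;
    long long best = 0;
    for (int j = 0; j <= S; j++)
        if (reach[j]) best = max(best, 1LL * (S - j) * j);
    return best;
}

int main() {
    // tiny usage example: sizes {3, 4, 5}; the best split is {3, 4} vs {5}, giving 7 * 5 = 35
    cout << bestSplit({3, 4, 5}) << "\n";
}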
There’s one small problem… doing subset sum is O(n^2), and wouldn’t summing that over all the Si blow up to O(n^3)? Even if it doesn’t, we need to do it for each node, so wouldn’t it become O(n^3)?
The answer is actually no (this is why I originally got stuck, because I also thought it had to be at least O(n^3))! Refer to #7 of this blog, a really interesting property to think about for tree dps!
Note: I was not able to solve this problem even after following the editorial, as the time limit is very strict, but I still feel like including it and mentioning the techniques used for it.
There are two main optimizations for this problem: the O(n√n) subset sum trick and using bitsets to optimize the dp by a factor of 64. Very tricky stuff :)
The editorial uses XOR hashing, but I feel like my alternative solution using graph theory is way more intuitive.
We can transform the whole problem into a graph theory problem. Let the colors be nodes, and add a directed edge from i to j if a bulb of color j is between the two bulbs of color i; this basically means that if color i is lit, we can light up color j with the second operation.
It’s not difficult to see that the answer to the first part is the number of connected components of this graph (it’s easy to see that with this construction, we can always find a node in each component that can reach all the other nodes in it), and the answer to the second part is the product, over all components, of the number of nodes that can reach every other node in that component.
The first answer is very easy to obtain, but the second one is slightly tricky. A naive way is to dfs from each node and, if it can reach all other nodes, count it in. Unfortunately this fails, as the complexity could go up to O(n^3).
Here is where Strongly Connected Components (SCC) come in! The main use of SCC is that we can “shrink” a directed graph into a Directed Acyclic Graph (DAG), and it’s easy to see that after shrinking, only the starting node (the node with in-degree 0) can reach all other nodes! So we just need to find how many original nodes this starting node consists of.
Our whole algorithm is as follows:
Build the graph -> run SCC -> count the in-degrees of the nodes of the new DAG -> calculate how many original nodes each new node consists of -> calculate the answer.
I used Kosaraju’s algorithm for SCC, because I’m too lazy to learn Tarjan’s TMT.
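For reference, a self-contained sketch of Kosaraju plus the two quantities the pipeline above needs (SCC sizes and condensation in-degrees); how they combine into the final answers depends on the exact statement, so I leave that part out, and the generic edge-list input here is my own assumption:

#include <bits/stdc++.h>
using namespace std;

int n;                                   // number of nodes (colors)
vector<vector<int>> g, rg;               // graph and reversed graph
vector<int> order_, comp;                // finish order, SCC id of each node
vector<char> vis;

void dfs1(int u) {                       // first pass: record finish order
    vis[u] = 1;
    for (int v : g[u]) if (!vis[v]) dfs1(v);
    order_.push_back(u);
}
void dfs2(int u, int c) {                // second pass on the reversed graph: label SCCs
    comp[u] = c;
    for (int v : rg[u]) if (comp[v] == -1) dfs2(v, c);
}

int main() {
    int m;                               // edges of the constructed graph
    cin >> n >> m;
    g.assign(n + 1, {});
    rg.assign(n + 1, {});
    for (int i = 0; i < m; i++) {
        int u, v;
        cin >> u >> v;                   // directed edge u -> v
        g[u].push_back(v);
        rg[v].push_back(u);
    }
    vis.assign(n + 1, 0);
    comp.assign(n + 1, -1);
    for (int u = 1; u <= n; u++) if (!vis[u]) dfs1(u);
    int c = 0;
    for (int i = n - 1; i >= 0; i--) {   // process in reverse finish order
        int u = order_[i];
        if (comp[u] == -1) dfs2(u, c++);
    }
    // size of each SCC, and in-degree of each SCC in the condensation DAG
    vector<long long> sz(c, 0), indeg(c, 0);
    for (int u = 1; u <= n; u++) sz[comp[u]]++;
    for (int u = 1; u <= n; u++)
        for (int v : g[u])
            if (comp[u] != comp[v]) indeg[comp[v]]++;
    // per the plan above: SCCs with in-degree 0 are the "starting" nodes of the DAG,
    // and sz[] tells how many original nodes each of them consists of
}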
Our solution for the easy version was bounded by the graph building process, which was O(n^2), but there is actually a way to optimize the graph down to only 2n edges!
I did not really understand how this optimization works, but I’ll still include it here and maybe try understanding it some day.
Afterwards: Chicago really has some good food! On a serious note, I really need to get myself back on track after this spring break. I know I can do it, fighting! And in terms of CF, I think I’ll start doing more 2000~2200 problems, and after that I’ll start doing contests again.