King Midas vs Medusa. Indonesia LuMaT 2019, Final.

An interesting piece I came across while browsing some combinatorial problems. The plot is connected with Greek mythology. Medusa was a monstrous creature: a winged human female with living venomous snakes in place of hair. Those who gazed into her eyes would turn to stone. King Midas ruled the kingdom of Phrygia. He had the ability to turn everything he touched into gold. So, you see, it’s interesting to imagine what would happen if we placed them both into the ring, one against the other! Here is the story.

Problem 8, 2nd day. King Midas and Medusa alternately play on an infinite chessboard. Initially, all cells of the chessboard are made of stone. In each of his turns, Midas chooses three cells which are in the same column or row and turns them into gold. In each of her turns, Medusa chooses any 2 \times 2 grid square and turns its 4 cells into stone. King Midas wins if he can make a 2019 \times 2019 grid square entirely of gold. Medusa wins if she can always prevent Midas from doing so. Who wins?

Some observations and motivation.

I admit, at first I read the problem very sloppily, and it seemed as if there was a 2019\times 2019 board on which the action took place. Viewed that way it’s a very easy problem, since Midas cannot win regardless of Medusa’s moves. Indeed, suppose Midas makes all cells of the table golden with his last move. But in her previous move Medusa turned some 2\times 2 square to stone, and the king cannot make all of its 4 cells gold in just one move. I even looked up the name of the competition, wondering whether it was a kindergarten one! Anyway, this observation is helpful. It shows King Midas cannot simply pick some big square and make it golden: Medusa always wins in any local area. If it is possible for Midas to win, he must place multiple threats of making golden squares in many well-separated areas. He should use his advantage of placing threats in 3 well-separated areas at a time, while Medusa can eliminate only 2 of them.

The strategy of King Midas.

We prove King Midas can always make a 2019\times 2019 gold square. Moreover, Midas, in a good mood, can even allow Medusa to turn to stone any 4 cells that are the intersection of 2 rows and 2 columns.

Let M be a large enough positive integer, to be determined later. The king chooses an infinite family \mathcal Q of disjoint 2019\times 2019 squares lined up in a row. In every move he turns into gold the upper-left cells of 3 different untouched squares in \mathcal Q. After M turns, he has gilded the upper-left corners of 3M squares in \mathcal Q, and Medusa can have spoiled at most 2M of them, since she can operate on no more than 2 squares of \mathcal Q at a time. Once Medusa operates on a square in \mathcal Q, Midas considers it unusable and takes no further action on its cells.

Thus, after the first phase, we get M squares in \mathcal Q with a golden upper-left cell. We denote this family of 2019\times 2019 squares by \mathcal {Q}_1. Next, Midas aims at M cells, one in each square of \mathcal{Q}_1, all at the same position inside the big squares, and begins turning them into gold, 3 at a time. He manages to do it in M/3 moves, while Medusa can damage at most 2M/3 squares of \mathcal{Q}_1. At this moment we have M-2M/3=M/3 squares of \mathcal{Q}_1 with two golden cells each, at the same positions. Denote that subfamily of \mathcal{Q}_1 by \mathcal{Q}_2.

Midas proceeds in the same spirit, obtaining at each next phase a family \mathcal{Q}_n of {M}/{3^{n-1}} squares with n golden cells each, at the same positions. For n=2019\cdot 2019 there will be a fully golden 2019\times 2019 square, provided we have chosen M\ge 3^{2019\cdot 2019 -1}.
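The phase counting above is easy to sanity-check numerically. Here is a minimal Python sketch for a small analogue of the strategy (a t\times t target instead of 2019\times 2019; the function name is mine):

```python
def surviving_squares(M, phases):
    """Squares of the family still usable after the given number of phases:
    in each phase Midas gilds one new cell in every usable square (3 cells
    per move), while Medusa spoils at most 2 squares per move, i.e. at most
    two thirds of them per phase."""
    m = M
    for _ in range(phases):
        m //= 3
    return m

# To finish a t x t golden square, after the first phase Midas needs
# t*t - 1 further phases, so M = 3**(t*t - 1) squares in Q_1 suffice.
t = 5
M = 3 ** (t * t - 1)
assert surviving_squares(M, t * t - 1) == 1
```

With t=2019 the same count gives exactly the bound M\ge 3^{2019\cdot 2019-1} from the text.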


Multiple of Power of 2. Bulgarian TST, 2020, p2.

I’ve recently reviewed, in some previous posts, 3 of the problems given at the latest Bulgarian team selection tests. Here is another one. It’s pure NT, and a good one in my humble opinion.

Problem 2 (Bulgarian TST, 2020, 1st day). Given two odd natural numbers a,b prove that for each n\in\mathbb{N} there exists m\in\mathbb{N} such that either a^mb^2-1 or b^ma^2-1 is multiple of 2^n.

Let us fix n\in \mathbb{N} and consider the multiplicative group G of odd residues modulo 2^n, i.e. G=\left( \mathbb{Z}/2^n\mathbb{Z}\right)^{\times}. Clearly |G|=\varphi(2^n)=2^{n-1}. It’s known that G is cyclic for n=1,2 and G\cong C_2\times C_{2^{n-2}} for n\geq 3, where C_k denotes the cyclic group of order k. Hereafter, we assume n\ge 3; the small cases of n are similar and easier. Consider the following cyclic subgroups of G (we identify a,b with their images in G).

A:=\{a^j : j=0,1,\dots\}\,;\, B:=\{b^j : j=0,1,\dots\}

Let |A|=k, |B|=\ell and assume k\leq \ell. In fact, k\mid \ell, since both k,\ell are powers of 2 (they divide |G|=2^{n-1}). Using G\cong C_2\times C_{2^{n-2}}, we represent a=(e_1, a_1), b=(e_2, b_1), with e_1,e_2\in\{-1,1\} and a_1^k=1, b_1^{\ell}=1. Note that b_1^{\ell/k} and a_1 generate subgroups of the same order in C_{2^{n-2}}. But any two subgroups of a cyclic group which have the same order are equal, hence

\displaystyle b_1^{r}=a_1.

for some r. Since e_2^r=\pm e_1, we can conclude b^{2r}=a^2. That is, a^2\in B, and choosing m such that b^m=a^{-2}\in B, we get b^ma^2=1 in G, i.e. 2^n divides b^ma^2-1.
In the case k>\ell we similarly get b^2\in A and a^mb^2=1 for an appropriate m.
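The claim is easy to confirm by brute force for small n. A quick Python check (helper name is mine; the exponent bound 2^{n-1}=|G| comes from the group order):

```python
def good_m_exists(a, b, n):
    """True if some m >= 1 makes a^m*b^2 - 1 or b^m*a^2 - 1 divisible by 2^n."""
    mod = 1 << n
    order = 1 << (n - 1)            # |G| = phi(2^n) = 2^(n-1)
    for m in range(1, order + 1):
        if pow(a, m, mod) * pow(b, 2, mod) % mod == 1:
            return True
        if pow(b, m, mod) * pow(a, 2, mod) % mod == 1:
            return True
    return False

n = 6
odds = range(1, 1 << n, 2)
# every pair of odd residues admits a suitable exponent m
assert all(good_m_exists(a, b, n) for a in odds for b in odds)
```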

Hamel Bases II. Application on Olympiad Problems. Romanian TST 1994.

In a previous post I considered an interesting problem with a surprising solution using Hamel bases. Here, a similar but harder problem is presented.

Problem 1. (Romanian TST for IMO 1994, 3rd test) Let p be a prime number. Suppose that real numbers a_1, a_2,\dots, a_{p+1} have the property that, whenever one of the numbers is deleted, the remaining numbers can be partitioned into two classes with the same arithmetic mean. Show that these numbers must be equal.

Note that the numbers a_1, a_2,\dots, a_{p+1} are real, which somehow rules out NT ideas. Still, we can try and prove the claim in the case those numbers are integers. Further, it easily extends to the case of rational numbers. Finally, the method shown allows us to pass from rationals to reals. It is done by representing each real number of the given set as a linear combination, with rational coefficients, of elements of some Hamel basis. The key point: the given property is transferred to each “vector” of the basis, and since the corresponding coordinates are rationals, we can apply the claim to each coordinate axis and make the final conclusion.

A more general environment.

This method can be applied in the following, more general, circumstances. Suppose we must prove that some property P of a set A of reals implies another property Q of the same set. Suppose both properties P and Q involve only linear expressions in the elements of A. Then there is a pretty good chance the described method will do the job.
Let us proceed with the details in this particular case.

I – The integer case.

Suppose we have p integers with sum s, which can be partitioned into two groups with the same average: one containing k of them with sum s_1, the other containing \ell numbers with sum s_2. Then s_1/k=s_2/\ell, implying

\displaystyle \ell s_1 = k s_2

Note that \gcd(\ell,k)=1, since k+\ell=p and p is prime. It yields \ell\mid s_2, k\mid s_1, thus s_1=rk, s_2=r\ell, where r=s_1/k=s_2/\ell is an integer. Finally, s=r(k+\ell)=rp. This observation can be worded as:

  • if a set of p integers can be partitioned into two subsets with the same arithmetic mean, then its sum must be a multiple of p.

Using this fact for a set a_1, a_2,\dots, a_{p+1} of integers satisfying the initial conditions, it follows that all a_i, i=1,2,\dots,p+1 have the same residue modulo p: deleting a_i leaves p numbers whose sum s-a_i must be a multiple of p, hence a_i\equiv s \pmod p for every i. Another two observations:

  • Adding the same number to all of a_1, a_2,\dots, a_{p+1} leaves the initial property intact.
  • Multiplying (or dividing) all a_i, i=1,2,\dots,p+1 by the same non-zero number yields a set that also satisfies the given property.

Starting from a set of numbers a_1, a_2,\dots, a_{p+1} complying with the problem requirement, we can make them all non-negative by adding to all of them some positive integer. Then we conclude they have the same residue modulo p. Subtracting that residue from all of them and then dividing by p doesn’t change the property. Each such step divides the difference \max a_i-\min a_i by p, and since this difference is a non-negative integer, after enough steps it must become 0. The steps are injective affine maps, so the only possibility is that the original numbers were all equal to begin with.
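The bulleted observation (an equal-mean split of p integers forces p \mid sum) is easy to stress-test. A small Python check for p=5, using cross-multiplication to keep the mean comparison exact (helper name is mine):

```python
from itertools import combinations
import random

def equal_mean_split_exists(nums):
    """True if nums can be split into two nonempty classes with equal means."""
    p, s = len(nums), sum(nums)
    for r in range(1, p):                      # size of the first class
        for comb in combinations(range(p), r):
            s1 = sum(nums[i] for i in comb)
            # equal means s1/r == (s-s1)/(p-r), cross-multiplied (exact)
            if s1 * (p - r) == (s - s1) * r:
                return True
    return False

# the observation: an equal-mean split of p integers forces p | sum
random.seed(1)
p = 5
for _ in range(500):
    nums = [random.randint(-30, 30) for _ in range(p)]
    if equal_mean_split_exists(nums):
        assert sum(nums) % p == 0
```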

II – The rational case.

Suppose now the numbers a_1, a_2,\dots, a_{p+1} are rational. We can multiply them by a common multiple of their denominators and get integers satisfying the same property. By the previous section they must be equal, hence so are the original rationals.

III – The real case.

Let a_i\in\mathbb{R}, 1\le i\le p+1 be real numbers. Take a Hamel basis B=\{b_i, i\in I\} of \mathbb{R} as a vector space over \mathbb{Q}; here I is an (uncountable) index set. Thus

\displaystyle a_j=\sum_{i\in I}\lambda_{i,j} b_{i}\,;\, j=1,2,\dots,p+1

where, for any fixed j, 1\le j\le p+1, only finitely many \lambda_{i,j} \in \mathbb{Q} are non-zero as i runs through I. Since B is a basis, representations are unique and the partition conditions are linear with rational coefficients, so the original property holds in each coordinate i\in I: for any fixed i\in I, the p+1 rational numbers \lambda_{i,j}, j=1,\dots, p+1 satisfy the given property, and since they are rationals, they are all equal. Thus, a_j, j=1,2,\dots,p+1 are equal.


Pinch of Linear Algebra. A Bulgarian NMO 1977 problem.

I came across this problem again a month ago, curious to see the problems given back in those years. I first saw it at the very competition it was given at. I was in 8th grade then, and I hardly remember whether I succeeded or not; most probably not! But I still remember the taste of the problem, and it was delicious! It is quite classical now, but it wasn’t then. So, here it is.

Problem 5 (Bulgarian NMO 1977, final round). Let Q(x) be a non-zero polynomial and k be a natural number. Prove that the polynomial P(x) = (x-1)^kQ(x) has at least k+1 non-zero coefficients.

I am going to present two solutions based on the same idea. The first one is more direct but uses a little bit of linear algebra. This is the solution I came up with when I had to solve it somehow. I don’t remember the official solution, but most probably it was induction on k, and that’s the second solution presented; I think it’s the shortest possible. I posted the problem in AoPS, see [2], and it was the solution given there.

So, we have a polynomial P(x), and we don’t even know its degree. The only thing we know is that it’s a multiple of (x-1)^k and it has to have some property: some minimal number of non-zero coefficients. The key idea is to find another way of testing whether (x-1)^k is a factor of P(x) or not. Of course, it’s equivalent to P^{(j)}(1)=0, j=0,1,\dots,k-1.

First solution, a pinch of linear algebra.

For the sake of contradiction, we assume P(x) has at most k non zero coefficients. Hence

\displaystyle P(x)=a_{n_1}x^{n_1}+a_{n_2}x^{n_2}+\dots +a_{n_k}x^{n_k}

assuming n_1>n_2>\dots>n_k\ge 0. All derivatives of P up to the (k-1)-th must vanish at the point x=1. Thus, we get k conditions P must comply with.

\displaystyle 0=P(1)=a_{n_1}+a_{n_2}+\dots+a_{n_k}\quad \quad (1)
\displaystyle 0=P^{(j)}(1)=n_1(n_1-1)\cdots(n_1-j+1)a_{n_1}+n_2(n_2-1)\cdots(n_2-j+1)a_{n_2}+\dots+n_k(n_k-1)\cdots(n_k-j+1)a_{n_k}\,,\,j=1,2,\dots,k-1

We may view the above k conditions as a homogeneous system of k linear equations in the k unknowns a_{n_1},a_{n_2},\dots,a_{n_k}. It must have a non-zero solution, since not all of the coefficients a_{n_j} are zero. The general theory says: when looking for solutions of a system like (1), we have to look at the determinant formed by its coefficients (the a_{n_j} being the unknowns). In this case this determinant is

D=\displaystyle \begin{vmatrix}1 & 1 & \ldots & 1\\{n_1} & {n_2} & \ldots & {n_k}\\n_1(n_1-1)&n_2(n_2-1)&\ldots & n_k(n_k-1)\\\vdots &\vdots & \ddots &\vdots&\\n_1\cdots(n_1-k+1)&n_2\cdots(n_2-k+1)&\ldots&n_k\cdots(n_k-k+1)\end{vmatrix}

In case D\ne 0, the only solution of (1) is the zero vector. So it remains to prove D\ne 0 in order to get a contradiction with the fact that not all a_{n_j} are zero.

Applying several times the rule “adding to a row of D a scalar multiple of another row does not change the determinant”, starting from the upper rows of D and moving down, we obtain

D=\displaystyle \begin{vmatrix}1 & 1 & \ldots & 1\\{n_1} & {n_2} & \ldots & {n_k}\\n_1^2&n_2^2&\ldots & n_k^2\\\vdots &\vdots & \ddots &\vdots&\\n_1^{k-1}&n_2^{k-1}&\ldots&n_k^{k-1}\end{vmatrix}

But this is exactly the Vandermonde determinant and

\displaystyle D=\prod_{1\le i<j\le k} (n_j-n_i)\neq 0

Hence we get a contradiction, and the result follows.
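The row reduction above can be verified numerically. A small Python check, with exact integer arithmetic and a permutation-expansion determinant (helper names are mine), that the falling-factorial determinant equals the Vandermonde product:

```python
from itertools import permutations
from math import prod

def falling(x, j):
    # falling factorial x(x-1)...(x-j+1); equals 1 for j = 0
    return prod(x - t for t in range(j))

def det(M):
    # Leibniz expansion; exact for integer entries and fine for small k
    k = len(M)
    total = 0
    for p in permutations(range(k)):
        sgn = 1
        for i in range(k):
            for j in range(i + 1, k):
                if p[i] > p[j]:
                    sgn = -sgn
        total += sgn * prod(M[i][p[i]] for i in range(k))
    return total

ns = [9, 5, 2, 0]                      # exponents n_1 > n_2 > ... > n_k
k = len(ns)
D = det([[falling(n, j) for n in ns] for j in range(k)])
vandermonde = prod(ns[j] - ns[i] for i in range(k) for j in range(i + 1, k))
assert D == vandermonde != 0
```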

A more classical solution.

Induction on k. The claim is obviously true for k=0. Suppose it’s true up to some k and take a polynomial P(x):=(x-1)^{k+1}Q(x), where Q(x) is not identically zero. Seeking a contradiction, suppose P has at most k+1 non-zero coefficients. We may assume Q(0)\ne 0, since otherwise we cancel the appropriate power of x, which changes neither the number of non-zero coefficients nor the power of (x-1) dividing the polynomial. Now consider the polynomial P'(x). Since P(0)=(-1)^{k+1}Q(0)\ne 0, the constant term of P is non-zero and differentiation kills it, so P' has at most k non-zero coefficients, and it can be represented as

P'(x)=(x-1)^k Q_1(x)

where Q_1(x)\not\equiv 0. By the induction hypothesis P'(x) must have at least k+1 non-zero coefficients, a contradiction.
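Either way, the statement itself is easy to stress-test. A quick Python check with random non-zero Q (coefficient lists, index = power of x; the poly_mul helper is mine):

```python
import random

def poly_mul(p, q):
    # multiply two polynomials given as coefficient lists
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

random.seed(7)
for _ in range(300):
    k = random.randint(1, 6)
    Q = [random.randint(-4, 4) for _ in range(random.randint(1, 6))]
    if not any(Q):
        continue                       # Q must be non-zero
    P = Q
    for _ in range(k):
        P = poly_mul(P, [-1, 1])       # multiply by (x - 1)
    assert sum(c != 0 for c in P) >= k + 1
```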


A Family of Sets. Bulgarian TST for IMO 2020, p3.

We have considered in some previous posts two interesting problems from the recent Bulgarian team selection tests (problem 4, problem 5). Here is a combinatorial problem from the first day, which in my opinion was the most fascinating one (maybe along with the problem 5, a kind of characterization of almost additive functions). That’s why I’m including it here. It gave me pleasure to unravel it.

Problem 3. Let \mathcal{C} be a family of subsets of A=\{1,2,\dots,100\} satisfying the following two conditions.

  1. Every 99-element subset of A is in \mathcal{C}.
  2. For any non-empty subset C\in\mathcal{C} there is c\in C such that C\setminus\{c\}\in \mathcal{C}.

What is the least possible value of |\mathcal{C}|?


We consider the family \mathcal{C}' of the sets that are complements of those in \mathcal{C}, i.e.

\mathcal{C}'=\{A\setminus C : C\in\mathcal{C}\}

Consider a directed graph G with vertex set \mathcal{C}'. For any C_1,C_2\in \mathcal{C}', C_1C_2 is a directed edge iff C_2 can be obtained by removing an element from C_1. This graph has one vertex with only out-edges, call it the root vertex: the set A\in \mathcal{C}' (A is in \mathcal{C}' because \emptyset\in \mathcal{C}). It also has exactly 100 vertices with only in-edges, the one-element sets \{a\}, a\in A; we call these vertices end vertices. Since there is a directed path from the root to any end vertex, we can construct a directed tree T in G that contains the root and all end vertices. We may assume T has no leaves other than the end vertices, since otherwise we can remove them. Thus, to every family \mathcal{C} with the required properties we can associate such a tree T, with |T|\le |\mathcal{C}|. On the other hand, any directed tree with vertices among the subsets of A that contains the root (the set A) and all end vertices (the sets \{a\}, a\in A) yields a collection of subsets whose complements satisfy the problem’s initial conditions. Hence, we look for such a tree with the minimum number of vertices.

Now, put n instead of 100 and let T be a tree as described above. We replace any maximal path of consecutive degree-2 vertices by a single edge, obtaining a tree (again denoted by T) in which every vertex, except the root and the leaves (end vertices), has degree at least 3. To each vertex v we assign a value L(v) equal to the number of leaves that can be reached starting from v, with L(v)=1 if v is itself a leaf. See an example below, for n=7.

Figure 1.

We define

\displaystyle f(T):= \sum_{\overrightarrow{uv}\in E(T)} (L(u)-L(v))

where the summation is taken over all directed edges \overrightarrow{uv}. For example, f(T)=20 for the tree in figure 1. It’s easy to see that f(T)+1 is exactly the number of sets in the family T is derived from. On the other hand, from any such tree with n leaves one can construct a family of f(T)+1 sets complying with the initial requirements.

Hereby, the problem boils down to constructing a directed tree T with n leaves for which f(T) is minimal. Along these lines, denote

\displaystyle f(n)=\min_{T \text{ is a tree with } n \text{ leaves}}f(T),\ n\ge 2\quad \quad (1)

Lemma. The minimal value of f(T) is attained when T is a balanced binary tree, by which we mean:

  1. Each vertex which is not a leaf has exactly two out-edges, so T is a binary tree.
  2. If u is a vertex which is not a leaf and u_1,u_2 are its adjacent upper vertices, i.e. \overrightarrow{uu_1},\overrightarrow{uu_2}\in E(T), then L(u_1) and L(u_2) differ by at most 1.

Proof. To prove 1), suppose that in a minimal tree T some vertex u has 3 out-edges uu_1, uu_2, uu_3. We erase the edges uu_2,uu_3 and insert another vertex u' between u and u_2,u_3, with edges uu' and u'u_2,u'u_3, thus obtaining a new tree T' for which f(T')<f(T), a contradiction.
To prove 2) we proceed by induction. Suppose it’s proved for all trees with m leaves, m<n. Let T be a tree with n leaves, root u_0, and root edges u_0u_1, u_0u_2.
Assume first that neither u_1 nor u_2 is an end vertex. By the induction hypothesis, we can assume the two trees with roots u_1,u_2 are balanced. Let u_1u_1',u_1u_1'' and u_2u_2', u_2u_2'' be their root edges correspondingly, with L(u_1')\le L(u_1''), L(u_2')\le L(u_2''). We consider a new tree obtained by removing the edges u_1u_1'', u_2u_2' and adding u_1 u_2'', u_2u_1'. The new tree T' is balanced at the root vertex and f(T')=f(T). Again by the induction hypothesis, we can make T' balanced at all other vertices without increasing f(T').
It remains to consider the second case, when one of the vertices u_1,u_2 is a leaf, say u_2. By the hypothesis, we may assume the tree T_1 rooted at u_1 is balanced. Let u_1' be a leaf of T_1 for which the path from u_1 to u_1' is shortest; its length is \lfloor \log_2(n-1)\rfloor since T_1 is balanced. We remove the edges u_0u_1,u_0u_2, then add two additional vertices as leaves and connect u_1' to them, making u_1' a fork. This new tree T_1' is balanced, and we have f(T_1')=f(T)-n+\lfloor \log_2(n-1)\rfloor+2, hence f(T_1')\le f(T) for n\ge 3. \blacksquare

Using this lemma and the definition (1) we get

f(n)=n+2f(n/2) ; for n even, n\ge 4
f(n)= n+f(\lfloor n/2 \rfloor)+f(\lfloor n/2 \rfloor+1) ; for n odd, n\ge 5

This recurrence yields f(n)=n\log_2 n when n is a power of 2. For the other values of n we consecutively get: f(3)=5, f(6)=16, f(7)=20, f(12)=44, f(13)=49, f(25)=118, f(50)=286, f(100)=672.
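The recurrence and the listed values can be checked with a short memoized computation; the base values f(1)=0, f(2)=2 are read off from the smallest trees (function name mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    # minimal f(T) over balanced binary trees with n leaves
    if n == 1:
        return 0
    if n == 2:
        return 2
    if n % 2 == 0:
        return n + 2 * f(n // 2)
    return n + f(n // 2) + f(n // 2 + 1)

assert [f(3), f(6), f(7), f(12), f(13), f(25), f(50), f(100)] == \
       [5, 16, 20, 44, 49, 118, 286, 672]
assert all(f(2 ** t) == 2 ** t * t for t in range(1, 11))   # n log2 n
```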
Thus, the minimal number of sets in a family satisfying the initial conditions is 672+1=673.

Ivan plays tic-tac-toe. Bulgarian TST 2020, p4.

In a previous post I commented on problem 5 of the Bulgarian TST 2020. Here is a very nice combinatorial problem from the same team selection test. This is the kind of problem where the right interpretation solves it. So, indeed, the point of view does matter!

Problem 4 (Bulgarian TST, 2020). Ivan plays a game on an n\times n chessboard. He consecutively places checkers on the board, each time complying with the following rule: the square the checker is placed on must be empty, and there must be an even number of checkers placed on the 2n-2 squares lying in the same row and column. The game is over when Ivan cannot place a checker any more. Find the minimal possible number of checkers on the board after the game ends.

Solution. Consider a bipartite graph G(R,C) whose vertices are the rows and columns of the n\times n table, so |R|=|C|=n. We may interpret an edge in G connecting r\in R and c\in C as the square where r and c intersect. The game being played can be translated as follows.
Initially G(R,C) has no edges. Each move connects a vertex r\in R with a vertex c\in C, provided the degrees of r and c have the same parity. Find the minimum number of edges when the game is over.

The key observation: after each move (each edge created) the number of vertices of even (resp. odd) degree in R equals that in C. Indeed, each new edge flips the parities of its two endpoints, and by the rule both endpoints have the same parity before the move.
Suppose at the final position R_0,R_1 are the vertices of even, resp. odd, degree in R, and C_0,C_1 the vertices of even, resp. odd, degree in C. Denote k:=|R_0|=|C_0|, \ell := |R_1|=|C_1|. Since we cannot add any edge, all edges between R_0 and C_0 are drawn, and the same holds for R_1 and C_1. It means

|E(G)|\geq k^2+\ell^2\,,\, k+\ell=n\quad (1)

The minimum of the RHS is attained when k and \ell are equal or differ by 1. A bit more work is needed in order to nail the right estimate. The parity of n is essential, and we consider the two possible cases.

1) n is odd, n=2k+1. By (1) it follows

E(G)\ge k^2+(k+1)^2

The estimate can be attained. Partition R, resp. C, into two sets R_0,R_1, resp. C_0,C_1, where |R_0|=|C_0|=k, |R_1|=|C_1|=k+1. It’s enough to construct a valid sequence of moves leading to a final position where all vertices in R_0 are connected to all vertices in C_0, all vertices in R_1 are connected to all vertices in C_1, and no other edges are drawn. This is indeed a final position: a vertex of R_0 has degree k while a vertex of C_1 has degree k+1 (and symmetrically), so no further edge can be added. To construct a valid sequence of moves producing a full bipartite graph G(X,Y), where X=\{x_1,\dots,x_\ell\}, Y=\{y_1,\dots,y_\ell\}, we make \ell steps: at each step i=0,1,\dots,\ell-1 we connect x_s with y_{s+i}, s=1,2,\dots,\ell, where s+i is taken modulo \ell. Before step i every vertex has degree i, so all these moves are valid.

2) n is even, n=2k. The above construction no longer works, since |R_0| and |R_1| have the same parity no matter how R is partitioned.
Let G(R,C) be a final position, R_0, C_0 the vertices of even degree, R_1,C_1 the odd ones, with |R_0|=|C_0|=k+s, |R_1|=|C_1|=k-s, 0\le s\le k. Then all edges between R_0 and C_0 are present, and the same holds for the edges between R_1 and C_1. There must exist additional edges: if k-s is even, each vertex of R_1\cup C_1 has k-s neighbours in the complete part but must have odd degree, and its extra edges go to the opposite even class, giving at least 2(k-s) additional edges; if k-s is odd, the same argument applied to R_0\cup C_0 gives at least 2(k+s)\ge 2(k-s) additional edges. Hence

E(G)\ge (k+s)^2+(k-s)^2+2(k-s)=
=2k^2+2k +2s^2-2s\ge 2k^2+2k

Equality is attained when s=0 or s=1. The example is a bit harder than in the previous case; it’s possible to reach a final configuration with s=1.
We partition R, resp. C, into two sets R_0, R_1, resp. C_0,C_1, with |R_0|=|C_0|=k-1, |R_1|=|C_1|=k, plus two additional vertices r\in R, c\in C. We initiate the process of building full bipartite graphs G_1(R_0,C_0), G_2(R_1,C_1) as shown above, switching the pair \{r,c\} between G_1 and G_2 at each step. Finally, we obtain two full bipartite graphs G_1(R_0,C_0) and G_2'(R_1\cup\{r\}, C_1\cup\{c\}), and moreover r,c are connected to all vertices in C_0, resp. R_0.
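For small boards the answer can be confirmed by exhaustive search over all possible plays. A Python sketch (bitmask board states, depth-first search over the game tree; helper names are mine):

```python
def min_final_checkers(n):
    """Exhaustively play the game on an n x n board, returning the minimal
    number of checkers over all reachable terminal positions."""
    seen, best = set(), [n * n]

    def dfs(state):
        if state in seen:
            return
        seen.add(state)
        rdeg = [bin(state >> (r * n) & ((1 << n) - 1)).count("1") for r in range(n)]
        cdeg = [sum(state >> (r * n + c) & 1 for r in range(n)) for c in range(n)]
        moves = [r * n + c
                 for r in range(n) for c in range(n)
                 if not state >> (r * n + c) & 1 and (rdeg[r] + cdeg[c]) % 2 == 0]
        if not moves:                       # terminal: Ivan cannot move
            best[0] = min(best[0], bin(state).count("1"))
        for mv in moves:
            dfs(state | 1 << mv)

    dfs(0)
    return best[0]

assert min_final_checkers(2) == 4      # n = 2k, k = 1: 2k^2 + 2k = 4
assert min_final_checkers(3) == 5      # n = 2k+1, k = 1: k^2 + (k+1)^2 = 5
```

n=4 (answer 12=2k^2+2k with k=2) is still within reach of this search, just noticeably slower.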


Bulgarian NMO 2020, Part III. Center of Gravity Estimate.

I’ve already commented on two problems from the Bulgarian NMO 2020 (problem 5 and problem 6). Here is another problem from the same competition.

Problem 2. Let b_1, b_2,\dots,b_n be real numbers with sum 2 and a_0,a_1,\dots, a_n be real numbers satisfying: a_0=a_n=0 and |a_i-a_{i-1}|\leq b_i, i=1,2,\dots,n. Prove that

\displaystyle \sum_{i=1}^n(a_i+a_{i-1})b_i\leq 2

Proposed by Nikolay Nikolov.

Very nice problem. An inequality, because many people still want their inequalities, but not a standard one, not relying on routine well-known tricks: one must think it through with one’s own head. I plan to give a solution based on an idea different from the two official ones. But let me first present two physics interpretations.

Center of Gravity interpretation.

We can write the inequality like

\displaystyle \sum_{i=1}^n\frac{a_i+a_{i-1}}{2}b_i\leq 1 \quad (1)

Now, interpret every interval [a_{i-1},a_{i}], i=1,2,\dots,n as a piece of a stick with mass b_i. For convenience, we may assume all a_i, i=1,2,\dots, n are non-negative, since replacing them by their absolute values preserves the constraints and does not decrease the LHS of (1). So, we obtain a stick S, which may be folded multiple times in some places, with length \ell=\max \{a_i:i=1,\dots,n\}\le 1. The center of gravity c of S is obtained as

c=\displaystyle \frac{1}{b_1+b_2+\dots+b_n}\sum_{i=1}^n\frac{a_i+a_{i-1}}{2}b_i=\frac{1}{2}\sum_{i=1}^n\frac{a_i+a_{i-1}}{2}b_i

But intuitively c\le 1/2: shifting to the right the pieces of S that are folded more than twice can only move the center of gravity to the right, and c is maximal when the stick has length exactly 1 and is covered exactly twice, in which case c=1/2. This is of course not a rigorous explanation, but it helped me to solve the problem the way presented in the last paragraph.

Another physics interpretation.

Actually, this one is a fully rigorous solution. Suppose a point x=x(t) moves along the Ox axis during the time t\in [0,2]. It moves with constant speed v_i=(a_i-a_{i-1})/b_i between every two positions a_{i-1},a_i, thus taking time \Delta t_i=b_i for that leg. So at the moments t_0=0,t_1,\dots,t_n=2 the point is at the positions x(t_0)=a_0=0, x(t_1)=a_1,\dots,x(t_n)=a_n=0. Since |a_i-a_{i-1}|\le b_i, the speed |x'(t)| of x is at most 1 at any moment t\in[0,2]. It easily follows

\displaystyle \frac{x(t_{i-1})+x(t_i)}{2}(t_{i}-t_{i-1})=\int_{t_{i-1}}^{t_i}x(t)\,dt


and, summing over i,

\displaystyle \sum_{i=1}^n\frac{a_i+a_{i-1}}{2}b_i=\int_0^2 x(t)\,dt\quad (2)

Since x(0)=x(2)=0, it follows \displaystyle x(t)=\int_0^t x'(u)\,du=\int_{2}^t x'(u)\,du, hence we get

\displaystyle \int_0^2 x(t)\,dt=\int_0^1\int_0^t x'(u)\,du\,dt+\int_1^2\int_2^t x'(u)\,du\,dt\le
\displaystyle \leq \int_0^1\int_0^t 1\,du\,dt+\int_1^2\int_t^2 1\,du\,dt=\int_0^1 t\,dt+\int_1^2(2-t)\,dt=1

Omitting the physics notions, this actually is a rigorous mathematical solution.
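A quick numeric sanity check of the inequality, assuming random admissible instances generated by a bridged random walk and a simultaneous rescaling so that \sum b_i=2 (the constraints are homogeneous, so scaling a and b together preserves them; helper logic is mine):

```python
import random

def random_instance(n):
    # random walk bridged to a_0 = a_n = 0, then b_i slightly above |a_i - a_{i-1}|
    w = [0.0]
    for _ in range(n):
        w.append(w[-1] + random.uniform(-1, 1))
    a = [w[i] - w[n] * i / n for i in range(n + 1)]       # now a[0] = a[n] = 0
    b = [abs(a[i] - a[i - 1]) + random.uniform(0, 0.5) for i in range(1, n + 1)]
    t = 2 / sum(b)                     # scale both: constraints are homogeneous
    return [x * t for x in a], [x * t for x in b]

random.seed(3)
for _ in range(1000):
    a, b = random_instance(random.randint(2, 12))
    lhs = sum((a[i] + a[i - 1]) * b[i - 1] for i in range(1, len(a)))
    assert lhs <= 2 + 1e-9
```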

Another solution.

Take any two consecutive points a_{i-1},a_i and suppose a_{i-1}\le a_i (the other case is the same). Let a_{i-1}=x_0\le x_1\le\dots \le x_{k}=a_i, where x_1,\dots,x_{k-1} are any additional points inserted inside [a_{i-1},a_i], and let b'_j, j=1,2,\dots,k satisfy

\displaystyle b'_j=\frac{x_j-x_{j-1}}{a_{i}-a_{i-1}}b_i\,, \,j=1,2,\dots,k

Then it easily follows

\displaystyle \frac{a_i+a_{i-1}}{2}b_i = \sum_{j=1}^k \frac{x_{j}+x_{j-1}}{2}b'_j

It means we can add, if needed, additional points to the given (a_i)’s, obtaining a set K of base knots 0=c_0<c_1<\dots<c_m and a sequence X= (x_j)_{j=0}^k, x_0=x_k=0, k>m, such that x_j\in K, j=0,1,\dots,k, and if x_j=c_s then either x_{j+1}=c_{s+1} or x_{j+1}=c_{s-1}. Moreover, the LHS of (1) equals

\displaystyle \sum_{j=1}^k\frac{x_j+x_{j-1}}{2}b'_j \quad (3)

for some b'_j\ge |x_j-x_{j-1}|,j=1,2,\dots,k. Suppose there exist j\in [1..k] with

x_j=c_s, x_{j+1}=c_{s-1},x_{j+2}=c_s\quad (4)

Then we transform the sequence X=(x_j) to another one X' defined as

x_0,x_1,\dots,x_j, x_{j+2},\dots, x_r=c_m, c_m+c_s-c_{s-1},c_m,x_{r+1},\dots,x_k

and add one additional knot c_m+c_s-c_{s-1} to K. In this way, jumping from X to X', we increase the LHS of (1). Thus, we can consecutively increase the LHS of (1) until no pattern (4) remains, meaning x_j increases (stepping on the knots) from 0 to its maximal value and then decreases back to 0. Further, we can increase the distances between adjacent knots so that |x_j-x_{j-1}|=b'_j, still not decreasing the LHS of (1). Finally, in this case it easily follows

\displaystyle \sum_{j=1}^k\frac{x_j+x_{j-1}}{2}|x_j-x_{j-1}| =1

Almost additive function. Bulgarian TST 2020, p5.

A problem from the Bulgarian TST for IMO 2020, held 07/07/2020.

Problem 5 (second day). A function f : \mathbb{R} \to \mathbb{R} is given, satisfying

\displaystyle |f(x+y)-f(x)-f(y)|\le 1, \forall x,y\in\mathbb{R}

Prove that there exists a unique function g:\mathbb{R}\to\mathbb{R} such that

\displaystyle g(x+y)=g(x)+g(y)\text{ and }|f(x)-g(x)|\le 1, \forall x,y\in \mathbb{R}

We’ll call any such function f almost additive. The plan is as follows. We first prove the existence of g in the case the domain of f is \mathbb{Z} (instead of \mathbb{R}); in this case g is linear. This is proved in Lemma 1 below, and it’s the first essential part. Then we extend it naturally to \mathbb{Q}. At the final step, a Hamel basis of \mathbb{R} over \mathbb{Q} is used to make the extension to \mathbb{R} possible, and here comes the second essential part: one must prove that after the extension g still sticks close to f. The uniqueness of g is easier.
A different approach is shown here [1].

Lemma 1. Let f:\mathbb{Z}\to \mathbb{R} satisfy

\displaystyle |f(x+y)-f(x)-f(y)|\le 1, \forall x,y\in\mathbb{Z}\quad (1)

Then there exists a linear function g:\mathbb Z\to\mathbb R, g(x)=kx, with

|f(x)-kx|\le 1, \forall x\in \mathbb{Z} \quad (2)

Moreover, g is unique.

Proof of Lemma 1. For any fixed x\in\mathbb{Z}, x\ne 0, let L(x) be the set of all linear functions that satisfy (2) for that fixed x. Identifying a linear function g(x)=kx with its coefficient k\in\mathbb{R}, we may interpret L(x) as a closed interval [k_1,k_2], where k_1,k_2 are determined from the conditions k_1x=f(x)-1, k_2x=f(x)+1 (with the endpoints swapped when x<0). We will prove that every two closed intervals L(x), L(y), x,y\in\mathbb{Z}, intersect, see the picture. It will imply that all of them have non-empty intersection.

Any two sets of linear functions intersect.

Suppose, for the sake of contradiction, L(x)\cap L(y)=\emptyset for some x< y\in \mathbb{Z}. It means there exists a linear function g(x)=kx such that \Delta_1:=f(x)-g(x), \Delta_2:=f(y)-g(y) satisfy |\Delta_1|>1, |\Delta_2|>1, and \Delta_1,\Delta_2 have different signs. Let f_1:=f-g. Then f_1 is also an almost additive function, and f_1(x)=\Delta_1, f_1(y)=\Delta_2. Using (1) for the pair y-x, x, it easily follows that |f_1(y-x)|>1 and f_1(y-x) has the same sign as \Delta_2. Proceeding in a similar way, as the Euclidean algorithm does, we obtain that |f_1(d)|>1, |f_1(2d)|>1 and f_1(d),f_1(2d) have different signs, where d=\gcd(x,y). Using (1) for x=y=d, i.e. |f_1(2d)-2f_1(d)|\le 1, we obtain a contradiction.
Now, since any two intervals L(x), L(y) have non-empty intersection, one-dimensional Helly’s theorem together with the finite intersection property of a family of compact sets implies there exists

\displaystyle k\in \bigcap_{x\in\mathbb{Z}}L(x)

It implies |f(x)-kx|\leq 1, \forall x\in\mathbb Z. The uniqueness part is easy: if g,h are two different linear functions, |g(x)-h(x)| can be made as large as we want, which rules out the possibility that both satisfy (2). \blacksquare
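The interval picture in the proof can be illustrated numerically with a concrete almost additive function (my example, not from the post): f(x)=1.3x+0.3\sin x satisfies (1), since 0.3|\sin(x+y)-\sin x-\sin y|\le 0.9, and intersecting the intervals L(x) recovers the slope k=1.3:

```python
import math

def f(x):
    # almost additive: |f(x+y)-f(x)-f(y)| = 0.3|sin(x+y)-sin x-sin y| <= 0.9
    return 1.3 * x + 0.3 * math.sin(x)

# L(x) = {k : |f(x) - kx| <= 1}, a closed interval for x != 0
lo, hi = -float("inf"), float("inf")
for x in range(1, 500):
    lo = max(lo, (f(x) - 1) / x)
    hi = min(hi, (f(x) + 1) / x)
assert lo <= 1.3 <= hi      # nonempty intersection containing the true slope
```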

Further, we prove the same claim as in lemma 1, but in case the domain of f is \mathbb Q.

Lemma 2. Let f:\mathbb{Q}\to \mathbb{R} satisfy

\displaystyle |f(x+y)-f(x)-f(y)|\le 1, \forall x,y\in\mathbb{Q}

Then there exists a unique linear function g:\mathbb Q\to\mathbb R, g(x)=kx, with

|f(x)-kx|\le 1, \forall x\in \mathbb{Q}

Proof of Lemma 2. By Lemma 1, there exists k\in \mathbb R satisfying (2) over \mathbb Z. Let \frac{p}{q}\in \mathbb{Q}. Consider the set S:=\{j\cdot \frac{p}{q} : j\in \mathbb Z\}. We can apply Lemma 1 again over the set S (a copy of \mathbb Z) and find g_1(x):= k_1x, k_1\in\mathbb R, satisfying the same condition (2) but over S. Using \mathbb Z\cap S\supseteq\{jp : j\in \mathbb Z\}, it easily follows that k_1=k, hence |f(p/q)-k\cdot p/q|\le 1. The uniqueness argument is the same as in Lemma 1. \blacksquare

At this point there are three possible ways to extend g over all reals (note that the extension need not be linear over \mathbb{R} anymore). The first one is mapping \mathbb R onto some initial segment of the ordinals and using transfinite induction to construct the extension. The second one is using Zorn’s lemma, which is virtually the same argument. The third way is using a Hamel basis of \mathbb R, which again is just another variation. In all of these ways the essential part is a consistency argument, needed in order for g to still stick close to f. I’ll apply the third plan.

Let B=\{b_i : i\in I\} be a Hamel basis of \mathbb R, where I is some (uncountable) index set. For any i\in I consider the set S_i := \{r\cdot b_i : r\in \mathbb Q\}. Applying Lemma 2 over S_i, we obtain k_i\in\mathbb R such that |f(rb_i)-k_irb_i|\le 1, \forall r\in\mathbb Q. We define g as

\displaystyle g\left(\sum_i r_ib_i\right):= \sum_i k_ir_ib_i

where the sum runs over a finite subset of I (every real number is a finite rational combination of basis elements). For any finite subset J\subset I we have

\displaystyle \left|f\left(\sum_{i\in J} r_ib_i\right)-g\left(\sum_{i\in J} r_ib_i\right)\right| \le \left|f\left(\sum_{i\in J} r_ib_i\right) -\sum_{i\in J} f(r_ib_i)\right| +  \sum_{i\in J} \left| f(r_ib_i) -k_ir_ib_i \right|\le 2|J|-1 \quad (3)

Here the first term is at most |J|-1: iterating the hypothesis |f(x+y)-f(x)-f(y)|\le 1 gives \left|f\left(\sum_{i\in J}x_i\right)-\sum_{i\in J}f(x_i)\right|\le |J|-1 by induction on |J|, and each summand of the second term is at most 1.
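A quick numerical sanity check of the bound |f(x_1+\dots+x_m)-\sum f(x_i)|\le m-1, again assuming the sample quasi-additive function f(x)=\lfloor\sqrt{2}\,x\rfloor (my choice, not part of the proof):

```python
import math, random

alpha = math.sqrt(2)
f = lambda x: math.floor(alpha * x)  # sample quasi-additive function (illustration only)

random.seed(1)
for _ in range(1000):
    xs = [random.uniform(-100, 100) for _ in range(random.randint(2, 8))]
    # |f(x_1 + ... + x_m) - (f(x_1) + ... + f(x_m))| <= m - 1
    assert abs(f(sum(xs)) - sum(f(x) for x in xs)) <= len(xs) - 1
```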

That is, g(x) cannot be too far from f(x) when x is represented as a linear combination of a fixed number of basis vectors. It remains to prove

|f(x)-g(x)|\leq 1, \forall x\in \mathbb{R}

Suppose, for the sake of contradiction, it fails for some x_0=\sum_{i\in J}r_ib_i, thus

|f(x_0)-g(x_0)|>1 \quad (4)

We consider the set S:=\{rx_0 : r\in\mathbb Q\}. Applying Lemma 2 (over S, which is a copy of \mathbb Q), we get a linear function g_1 : S\to \mathbb{R} with |f(s)-g_1(s)|\le 1, \forall s\in S. Using (3), for every s\in S it yields

|g(s)-g_1(s)| \le |g(s)-f(s)|+ |f(s)-g_1(s)|\le 2|J|-1+1=2|J|

Note that g is also a linear function over S, and g\neq g_1 because of (4). It means |g(s)-g_1(s)| can be made as large as we want, a contradiction. The uniqueness of g follows as in Lemma 1. \blacksquare

Bulgarian TST 2020, problem 4. >>


Bulgarian NMO, 2020, part II.

In the previous post, we considered a problem from the Bulgarian NMO, 2020. Here comes the next one – problem 6, which in my opinion is the most interesting one given at that competition. And maybe the most difficult one (leaving aside the already commented problem 5), though the idea behind it is not unusual.

Problem 6. Let f(x) be a nonconstant real polynomial. The sequence \displaystyle \{a_i\}_{i=1}^{\infty} of real numbers is unbounded and

a_i <a_{i+1}<a_i+2020, \forall i\in\mathbb{N}

The integers \displaystyle \lfloor{|f(a_1)|}\rfloor , \lfloor{|f(a_2)|}\rfloor , \lfloor{|f(a_3)|}\rfloor, \dots are written consecutively one after another, whereby their digits form an infinite sequence of digits \displaystyle \{s_k\}_{k=1}^{\infty}, s_k\in\{0, 1, \dots, 9\}. Let n be any natural number. Prove that every n-digit number is present in the set \displaystyle \{\overline{s_{n(k-1)+1}\,s_{n(k-1)+2}\,\cdots s_{nk}} : k\in\mathbb{N}\}.

Author: Aleksandar Ivanov.

The idea is that the sequence b_i := \lfloor{|f(a_i)|}\rfloor, i=1,2,\dots doesn’t increase too fast, so for any pattern of n digits there exist as many consecutive terms of (b_i) beginning with that pattern as we want. The official solution also exploits a variation of this idea and is similar to the first post in [1].

Solution (rather an outline).

Let \displaystyle b_i := \lfloor{|f(a_i)|}\rfloor, i\in\mathbb{N}. The following properties of the integer sequence (b_i) clearly hold.

  1. \displaystyle \lim_{i\to\infty}\frac{b_{i+1}}{b_i}=1.
  2. \displaystyle \{b_i\}_{i=1}^{\infty} is unbounded.
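Both properties are easy to verify numerically for a concrete choice (the polynomial and the sequence a_i below are my own illustration, not from the problem):

```python
import math

f = lambda x: x**3 - 5 * x             # sample nonconstant polynomial
a = lambda i: 1.5 * i                  # satisfies a_i < a_{i+1} < a_i + 2020
b = lambda i: math.floor(abs(f(a(i))))

i = 10**6
assert abs(b(i + 1) / b(i) - 1) < 1e-5   # property 1: consecutive ratios tend to 1
assert b(i) > b(10) > 0                  # property 2: the sequence grows without bound
```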

These two properties are enough to prove the claim. Let n\in \mathbb{N} and M:=\overline {d_1\,d_2\,\dots\,d_n},d_i\in\{0,1,2,\dots,9\} be any n-digit number. Take some large enough m\in\mathbb{N} and let

\displaystyle M_1:=\overline{d_1\,d_2\,\dots\,d_n\,\underbrace{0\,0\,\dots\, 0}_{m\text{ 0's}}}

\displaystyle M_2:=\overline{d_1\,d_2\,\dots\,d_n\,\underbrace{9\,9\,\dots\, 9}_{m\text{ 9's}}}

For any m\ge 1 it follows

\displaystyle \frac{M_2}{M_1}=\frac{M\cdot 10^m+10^m-1}{M\cdot 10^m}>1+\frac{1}{2M}

Because of 2), the sequence (b_i)_{i=1}^{\infty} eventually exceeds M_2, and because of 1) the ratio b_{i+1}/b_i eventually drops below 1+\frac{1}{2M}, so the sequence cannot jump over the interval [M_1,M_2] – it must hit it. Moreover, because of 1) we can take M_1 large enough (making m large enough) so that as many consecutive terms b_i hit the interval [M_1,M_2] as we want. It means that inside the sequence \{s_i\}_{i=1}^{\infty} there exist r consecutive patterns

\displaystyle d_1\,d_2\,\dots\,d_n\,\underbrace{*\,*\,\dots\, *}_{m\text{ digits}}

where r\in\mathbb{N} depends on m, but can be made as large as we want. At this point, we have a window of n adjacent positions stepping along those r consecutive patterns. Each pattern has length n+m, so the window’s offset within a pattern shifts by m\bmod n from one pattern to the next. It’s enough to take m\equiv 1\pmod n to be sure the window hits d_1\,d_2\,\dots\,d_n after passing along at most n of the patterns.
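The alignment step can be simulated directly. In the sketch below the concrete values n=3, m=7 and the random filler digits are my assumptions; r=n consecutive patterns guarantee a hit regardless of the global offset t:

```python
import random

def check_alignment(n, d, m, r, t):
    """Write r consecutive patterns (d followed by m filler digits) after t
    unrelated digits; look for d at a window position divisible by n."""
    s = "".join(str(random.randrange(10)) for _ in range(t))          # junk prefix
    for _ in range(r):
        s += d + "".join(str(random.randrange(10)) for _ in range(m))
    return any(s[k:k + n] == d for k in range(0, len(s) - n + 1, n))

random.seed(0)
n, d = 3, "123"
m = 7   # m % n == 1, as in the argument
# pattern starts shift by m mod n = 1 each time, so n patterns cover all offsets
assert all(check_alignment(n, d, m, r=n, t=t) for t in range(50))
```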


Bulgarian NMO, 2020, part I.

The final round this year was held on July 29–30, two months later than usual, due to that mass hysteria called Covid-19. Nevertheless, better late than never! Good problems were presented, I think – except just one. I intend to consider some of them here in 2–3 posts. Let’s begin with the bad one.

Problem 5 (Bulgarian NMO, 2020). Given n points in the plane, some of them connected with segments. Some of the segments are colored white and some others black, satisfying the following condition: there exist two cycles – one consisting of white segments, the other of black ones – each of which passes through every one of the points exactly once.
It’s known that AB and BC are white segments. Prove that all of the segments can be recolored with red and blue so that: AB and BC are both red; not all of the white segments become red; and there again exist one red and one blue cycle that pass through each point exactly once.

It looks promising and pleasant, but after half an hour of effort, I realized it was a hard problem. We have a 4-regular graph with two disjoint Hamiltonian cycles on it and want to prove there exist another two Hamiltonian cycles, moreover with some additional condition. I searched for “Hamiltonian cycles in 4-regular graph” and 5 min. later found the theorem presented below. It still looked like a nice problem, because the problem was a very special case of that result, and there could exist some different approach to it. In order to be more specific, I need to present Thomason’s result. Two short definitions first.

Definitions. Given a 4-regular graph G, we say \{H_1,H_2\} is a Hamiltonian decomposition of G if H_1 and H_2 are edge disjoint Hamiltonian cycles in G. It implies G=H_1\cup H_2. Let P be the set of all Hamiltonian decompositions of G (P may be empty). For any edges w,w'\in E(G) by P(w,w') we denote the set of Hamiltonian decompositions of G for which both w,w' belong to the same cycle. Analogously, Q(w,w') denotes the set of Hamiltonian decompositions of G for which both w,w' belong to different cycles.

Theorem (A.G. Thomason, 1978). Let G be a finite undirected 4-regular graph (multigraph) of order at least 3. Then

  1. |P(w,w')| is even for any w,w'\in E(G).
  2. |Q(w,w')| is even for any w,w'\in E(G).
  3. The number of Hamiltonian decompositions of G is even.
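All three parities can be checked on the smallest instance: K_5 is 4-regular and decomposes into pairs of complementary Hamiltonian cycles. The brute-force check below is my own sanity test, not part of Thomason’s proof:

```python
from itertools import permutations

# Hamiltonian cycles of K5, each represented as a frozenset of (frozenset) edges
def cycle_edges(perm):
    return frozenset(frozenset((perm[i], perm[(i + 1) % 5])) for i in range(5))

cycles = {cycle_edges((0,) + p) for p in permutations(range(1, 5))}
all_edges = frozenset(frozenset((i, j)) for i in range(5) for j in range(i + 1, 5))

# Hamiltonian decompositions {H1, H2}: complementary pairs of Hamiltonian cycles
decomps = {frozenset((c, all_edges - c)) for c in cycles if (all_edges - c) in cycles}
assert len(decomps) == 6   # entry 3): the total number is even

# entries 1) and 2) for the adjacent edges w = 01, w' = 12
w, wp = frozenset((0, 1)), frozenset((1, 2))
same = sum(1 for d in decomps if any(w in c and wp in c for c in d))
assert same % 2 == 0 and (len(decomps) - same) % 2 == 0
```

All three counts come out even, as the theorem predicts.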

Thus, the claim of the problem is just the first entry 1) of the above result, in the special case where w,w' are adjacent. Indeed, since there is one such decomposition, by 1) there must exist another one.
So, one may think that the proof of problem 5 should be easier than this theorem, since it’s just a very special corollary. The proposer of the problem was aware of the mentioned result and maybe he had found some simpler way of proving that special case!?
Finally, I gave up and looked at the official solution. I was amazed to see it was essentially Thomason’s proof. In order to prove the problem, one must prove all of 1), 2), 3); in fact, the official solution just follows Thomason’s proof.
As I see it now, Thomason wanted to prove 3), but, as always happens, he was forced to prove 1) and 2) along the way, and all three entries are strongly entangled. The proof goes by induction and depends on all of them. So, it’s easier to prove the theorem as stated than problem 5 itself.

Here is a proof of Thomason’s result, as presented in [1]. A very cute argument! But my opinion is that problems like this one should never appear at math competitions. I’ll be glad if someone manages to prove this problem in a more straightforward manner.

Thomason’s result. (Decompositions of Graphs, Juraj Bosak.) see [1]

To the next part >>

[1] Juraj Bosak, Decompositions of Graphs, Taylor & Francis, 1990; Lemma 11.46, p134.