First email
Subject: [Brilliant] New feedback program for problem creators
Hey <Author>,
We greatly appreciated the originality and creativity of your recent problems. We think you have great potential to further enhance your write-ups, so we have created a program for authors like you. Through it, you will receive our personalized feedback on the completeness and clarity of your future problems, along with suggestions such as the best theme or answer options for each problem. This will help your problems stand out from the crowd and guide you toward becoming a great author. To benefit fully from the experience, we ask you to devote about 3 hours of your time each week for a month.
If you’re interested in participating in September, reply YES to this email. To get started, you can read this Wiki page, which explains the attributes of a great problem (tempting to answer, easy to get started, imagery, and familiarity with the setup) and use them to help your problems become popular.
Thanks, Brilliant Staff
Second email: For a positive response
Hey <Author>,
Thanks for showing interest in joining us to improve your write-ups and problem-writing skills. You are one step closer to becoming a great problem writer.
To get started, submit your future problem write-ups through this form. We will then reply to you in our chat room (details below) with feedback on how to improve the quality, correctness, and phrasing of your problem. This will allow you to update your write-up and post a great problem.
Chat room: Slack Brilliant Lounge is our chat room for communicating with other Brilliant members. Within the next day, you will receive an email from Slack with the subject "Brilliant Lounge Bot has invited you to join Brilliant Lounge on Slack". It will provide details on how to activate your account and join the chat room.
Thanks, Brilliant staff.
Email for “if they say no” response
Thanks for responding. If you wish to join the program in a later month, simply let us know.
We appreciate the problems that you’ve been creating and look forward to more of them.
Thanks, Brilliant staff.
Easy Math Editor
This discussion board is a place to discuss our Daily Challenges and the math and science related to those challenges. Explanations are more than just a solution — they should explain the steps and thinking strategies that you used to obtain the solution. Comments should further the discussion of math and science.
When posting on Brilliant:
*italics*
or _italics_
**bold**
or __bold__
paragraph 1
paragraph 2
[example link](https://brilliant.org)
> This is a quote
Wrap math expressions in \( ... \) or \[ ... \] to ensure proper formatting.
2 \times 3
2^{34}
a_{i-1}
\frac{2}{3}
\sqrt{2}
\sum_{i=1}^3
\sin \theta
\boxed{123}
Comments
Aditya Kumar's flawed+deleted question: see report section
\[\int_0^\infty x^4 \left(\int_0^x \frac{\sin t}{t}\,dt\right)dx\]
If the value of the integral above is equal to \(-\frac{A}{B}\), where \(A\) and \(B\) are coprime positive integers, find \(A+B\).
This integral does not converge. We are asked to integrate \[\int_0^\infty x^4 \left(\int_0^x \frac{\sin t}{t}\,dt\right)dx.\] The integrand \(f(x) = x^4\left(\int_0^x \frac{\sin t}{t}\,dt\right)\) exists for all \(x\), but \(f(x) \sim \frac{1}{2}\pi x^4\) as \(x \to \infty\), and hence \(f(x)\) cannot be integrable on \([0,\infty)\).
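As a quick numerical companion to this point (my own sketch, not part of the original comment, using SciPy's `sici`, which returns the pair \((\mathrm{Si}(x), \mathrm{Ci}(x))\)): since \(\mathrm{Si}(x) \to \frac{\pi}{2}\), the integrand tracks \(\frac{1}{2}\pi x^4\) and blows up.

```python
# Sanity check: the integrand x^4 * Si(x) grows like (pi/2) x^4,
# so it cannot be integrable on [0, infinity).
import numpy as np
from scipy.special import sici  # sici(x) returns (Si(x), Ci(x))

for x in [10.0, 100.0, 1000.0]:
    si, _ = sici(x)
    print(f"x = {x:7.1f}   x^4*Si(x) = {x**4 * si:.3e}   (pi/2)x^4 = {np.pi / 2 * x**4:.3e}")
```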
Sir, can you prove that the solution obtained by Ramanujan's Master Theorem is wrong?
Whether \(x^4\) is inside or outside the inner integral is irrelevant; it does not change the situation.
Ramanujan's Master Theorem needs to be used carefully. It does not work for just any function. Ramanujan's initial presentation of this Theorem was based on a series of formal arguments which, as G. H. Hardy pointed out when he first proved the Theorem rigorously, were often invalid.
Basically, Ramanujan's Master Theorem is a formal identity, which is only true for certain types of functions. True, many functions are included in the requirements of the Theorem, but these requirements are such as to make the integrals converge! By definition, therefore, your function cannot satisfy the Theorem. For example, here is one formal statement of RMT, based on properties of the Mellin transform:
Let \(\phi\) be a function which is analytic on \(\{z \in \mathbb{C} : \mathrm{Re}\,z \ge -\delta\}\) for some \(0 < \delta < 1\). Suppose that, for some \(A < \pi\) and \(C, P > 0\), the function \(\phi\) satisfies the growth condition \[\left|\phi(v+iw)\right| \le C e^{Pv + A|w|}, \qquad v \ge -\delta,\ w \in \mathbb{R}.\] Then \[\int_0^\infty x^{s-1}\left(\sum_{n=0}^\infty (-1)^n \phi(n)\, x^n\right)dx = \frac{\pi}{\sin s\pi}\,\phi(-s), \qquad 0 < \mathrm{Re}\,s < \delta.\]
I suspect that your function does not satisfy this condition.
Ultimately, the region of validity of RMT (the values of \(s\) for which it is true) is basically the set of values of \(s\) for which the integral on the LHS converges. If the integral \[\int_0^\infty x^{s-1}\left(\sum_{n=0}^\infty (-1)^n \phi(n)\, x^n\right)dx\] converges, then the RMT can probably be used to evaluate it. It cannot, however, be used to give values to nonconvergent integrals.
You can integrate the Sine integral, but more care is needed. It can be shown, for example, that \[\int_0^\infty x^{s-1}\left(\int_0^x \frac{\sin t}{t}\,dt - \frac{1}{2}\pi\right)dx = -\frac{\Gamma(s)\sin\left(\frac{1}{2}\pi s\right)}{s}, \qquad 0 < \mathrm{Re}\,s < 1.\] It is most likely that the RMT can be used to establish this one! However, the range of validity of \(s\) does not include \(s = 5\), which is what I think you used to get your answer of \(-\frac{4!}{5}\).
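Here is a hedged numerical check of this identity at \(s = \frac{1}{2}\) (my own sketch, not from the comment; it assumes mpmath's `si` and `quadosc` behave as documented, and the split point \(10\pi\) is an arbitrary choice that may need tuning):

```python
from mpmath import mp, mpf, si, pi, gamma, sin, quad, quadosc, inf

mp.dps = 25
s = mpf(1) / 2
f = lambda x: x**(s - 1) * (si(x) - pi / 2)

# Ordinary quadrature near 0, oscillatory quadrature on the tail.
lhs = quad(f, [0, 10 * pi]) + quadosc(f, [10 * pi, inf], period=2 * pi)
rhs = -gamma(s) * sin(pi * s / 2) / s   # = -sqrt(2*pi) at s = 1/2
print(lhs)
print(rhs)  # both approx -2.5066...
```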
Formal manipulations need to be handled with care; Ramanujan, for example, manipulated series formally and came up with \(1+2+3+4+\cdots = -\frac{1}{12}\). While it is true that \(\zeta(-1) = -\frac{1}{12}\), and \(1+2+3+\cdots\) is what we obtain if we formally substitute \(s = -1\) into the definition of \(\zeta(s)\), that does not get around the fact that the series does not converge, to \(-\frac{1}{12}\) or anything else. Of course, it has to be said that high-powered quantum physicists use this and similar formulae every day, but that is only as a shorthand for performing the analytic continuation required to obtain \(\zeta(-1)\) from the series formula for \(\zeta(s)\) (which is only valid for \(\mathrm{Re}\,s > 1\)).
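A tiny illustration of the distinction (my addition): mpmath evaluates \(\zeta(-1)\) by analytic continuation, while the series itself simply diverges with the cutoff.

```python
from mpmath import zeta

print(zeta(-1))                       # -0.08333... = -1/12, by analytic continuation
print(sum(range(1, 10**6 + 1)))       # about 5e11, and growing with the cutoff
```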
[Image: Ramanujan's original proof of the Master Theorem]
This is the original proof provided by Ramanujan. So you mean to say that there are some functions \(\phi(x)\) for which the summations don't converge?
Even I was surprised that \(1+2+3+4+\cdots = \frac{1}{12}\). Is there any theoretical proof that it is wrong? Obviously, practically anyone can prove that it is wrong.
Talking about this problem, it would be best to delete it and post a new one.
Thanks for the explanation!
Can you provide a good source where I can properly learn how to use RMT?
I am quoting G. H. Hardy, who said that some of the applications in the original paper were inapplicable. The scale of the graphic is pretty small, but Corollary 1 is the result we have been talking about; the nature of the function \(\phi\), however, is not specified. That is just what I was talking about; until the type of function is specified, this is just a formal statement. I believe that some of the cases that Ramanujan gave as examples of his idea were based on functions \(\phi\) for which the result was not applicable.
Minus \(\frac{1}{12}\), not plus \(\frac{1}{12}\). The series diverges to infinity - what more proof is needed? The theory of complex functions is strange; the fact that the zeta function has a perfectly reasonable value at \(-1\) which has, on the face of it, nothing whatsoever to do with the formulae defining the zeta function elsewhere is one of the weird consequences of the theory of analytic continuation.
I am afraid there is no real way to redeem your question.
Combinatorics approach for triple summation
This is something of a special case of Euler's work on the generating function of the partition function P.
Let us let \(S(n)\) be the sum \[S(n) = \sum_{\substack{a_1,a_2,\ldots,a_n \ge 0 \\ a_1,a_2,\ldots,a_n \text{ distinct}}} 2^{-(a_1+a_2+\cdots+a_n)} = n! \sum_{0 \le a_1 < a_2 < \cdots < a_n} 2^{-(a_1+a_2+\cdots+a_n)}.\] Then \[S(n) = n! \sum_{N \ge \frac{1}{2}n(n-1)} \left|A(N,n)\right| 2^{-N},\] where \(A(N,n)\) is the set \[A(N,n) = \left\{\mathbf{a} \in (\mathbb{N}\cup\{0\})^n \,\middle|\, a_1 < a_2 < \cdots < a_n,\ \sum_{j=1}^n a_j = N\right\}, \qquad N \ge \tfrac{1}{2}n(n-1).\] If we also consider the set \[B(L,n) = \left\{\mathbf{b} \in (\mathbb{N}\cup\{0\})^n \,\middle|\, \sum_{j=1}^n (n+1-j)\, b_j = L\right\}, \qquad L \ge 0,\] then the mapping \(X : B\big(N - \tfrac{1}{2}n(n-1),\, n\big) \to A(N,n)\) given by the formula \[(X\mathbf{b})_j = j - 1 + \sum_{r=1}^j b_r, \qquad 1 \le j \le n,\ \mathbf{b} \in B\big(N - \tfrac{1}{2}n(n-1),\, n\big),\] is a bijection for any \(N \ge \tfrac{1}{2}n(n-1)\), and hence \(\left|A(N,n)\right| = \left|B\big(N - \tfrac{1}{2}n(n-1),\, n\big)\right|.\)
But \(|B(L,n)|\) is the coefficient of \(t^L\) in the series expansion of the product \(\prod_{r=1}^n (1-t^r)^{-1}\), and hence \(|A(N,n)|\) is the coefficient of \(t^N\) in the series expansion of the product \[Q_n(t) = t^{\frac{1}{2}n(n-1)} \prod_{r=1}^n (1-t^r)^{-1} = \prod_{r=1}^n \frac{t^{r-1}}{1-t^r}.\] Thus it follows that \[S(n) = n!\, Q_n\!\left(\tfrac{1}{2}\right) = n! \prod_{r=1}^n \frac{2}{2^r - 1} = n!\, 2^n \left(\prod_{r=1}^n (2^r - 1)\right)^{-1}.\] In this case, we have \(S(3) = \frac{3! \times 2^3}{1 \times 3 \times 7} = \frac{16}{7}\), making the answer \(16 + 7 = 23\).
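As a sanity check (my own code, not part of the solution), one can compare a brute-force truncation of the defining sum with the closed form; `cutoff` is an arbitrary truncation bound I chose:

```python
from fractions import Fraction
from itertools import combinations
from math import factorial, prod

n, cutoff = 3, 40
# combinations(range(cutoff), n) yields exactly the strictly increasing tuples.
brute = factorial(n) * sum(
    Fraction(1, 2 ** sum(a)) for a in combinations(range(cutoff), n)
)
closed = Fraction(factorial(n) * 2**n, prod(2**r - 1 for r in range(1, n + 1)))
print(float(brute), closed)  # brute is approx 16/7 (truncation error ~ 2^-38); closed == 16/7
```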
I have so many questions and I don't know where to begin, so let me just ask whatever appears the most confusing:
Question 01: How did you convert the first \(S(n)\) to the second \(S(n)\)?
\[S(n) = \sum_{\substack{a_1,a_2,\ldots,a_n \ge 0 \\ a_1,a_2,\ldots,a_n \text{ distinct}}} 2^{-(a_1+a_2+\cdots+a_n)} = n! \sum_{0 \le a_1 < a_2 < \cdots < a_n} 2^{-(a_1+a_2+\cdots+a_n)}\]
to
\[S(n) = n! \sum_{N \ge \frac{1}{2}n(n-1)} \left|A(N,n)\right| 2^{-N}?\]
(Yes, I've read your definition of \(A(N,n)\) already and I'm still clueless.)
Question 02: What does \(\mathbf{a} \in (\mathbb{N}\cup\{0\})^n\) mean? Does it mean a vector called \(\mathbf{a}\) with all non-negative integer elements, right?
Question 03: Can you explain to me what this means? Mapping? Do you mean a one-to-one relation?
Question 04: What does \((X\mathbf{b})_j = j - 1 + \sum_{r=1}^j b_r\) mean? What does \((X\mathbf{b})_j\) mean exactly? I haven't seen this notation before in your solution.
Every term \(2^{-a_1-a_2-\cdots-a_n}\) with \(a_1+a_2+\cdots+a_n = N\) contributes \(2^{-N}\) to the sum. The number \(|A(N,n)|\) just counts how many different times \(2^{-N}\) occurs in the sum. Basically, I am collecting terms.
Yes. \((\mathbb{N}\cup\{0\})^n\) is the set of \(n\)-tuples of nonnegative integers.
A mapping is another word for a function. Since I am saying that X is a bijection, you can read this whole bit as saying that X sets up a 1-1 correspondence between the two sets, if that is a more familiar expression to you.
Remember, \(X\) is a function between sets of \(n\)-tuples. If we start with the \(n\)-tuple \(\mathbf{b}\) in \(B\big(N-\tfrac{1}{2}n(n-1),\, n\big)\), then \(X\mathbf{b}\) is the \(n\)-tuple (call it \(\mathbf{a}\)) whose coefficients are \[a_j = j - 1 + \sum_{r=1}^j b_r, \qquad 1 \le j \le n.\] I am using the standard notational abbreviation \(\mathbf{a}\) to represent the \(n\)-tuple \((a_1,a_2,\ldots,a_n)\), and similarly for \(\mathbf{b}\). You need to check that \(X\mathbf{b}\) actually belongs to \(A(N,n)\) (that is the point of \(X\)) and that \(X\) has an inverse, so that every element of \(A(N,n)\) is equal to \(X\mathbf{b}\) for a unique element \(\mathbf{b}\) in \(B\big(N-\tfrac{1}{2}n(n-1),\, n\big)\). You retrieve \(\mathbf{b}\) from \(\mathbf{a} = X\mathbf{b}\) by the formula \[b_j = \begin{cases} a_1 & j = 1 \\ a_j - a_{j-1} - 1 & 2 \le j \le n. \end{cases}\]
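For readers who prefer to see the bijection run: here is a small exhaustive check (my own sketch; the names `X` and `X_inv` are mine) that \(X\) lands in \(A(N,n)\) and that the stated inverse recovers \(\mathbf{b}\):

```python
from itertools import product

def X(b):
    """(Xb)_j = j - 1 + b_1 + ... + b_j, with j running from 1 to n."""
    a, running = [], 0
    for j, bj in enumerate(b, start=1):
        running += bj
        a.append(j - 1 + running)
    return tuple(a)

def X_inv(a):
    """b_1 = a_1 and b_j = a_j - a_{j-1} - 1 for 2 <= j <= n."""
    return (a[0],) + tuple(a[j] - a[j - 1] - 1 for j in range(1, len(a)))

n = 3
for b in product(range(5), repeat=n):
    a = X(b)
    L = sum((n + 1 - j) * bj for j, bj in enumerate(b, start=1))
    assert all(a[i] < a[i + 1] for i in range(n - 1))   # a is strictly increasing
    assert sum(a) == L + n * (n - 1) // 2               # components of a add to N = L + n(n-1)/2
    assert X_inv(a) == b                                # roundtrip: the map is invertible
print("bijection checks passed")
```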
Thank you for your reply. Even if I accept that the set \(A(N,n)\) is carefully constructed, I still couldn't for the life of me figure out how you managed to pull the set \(B(L,n)\) out of nowhere. That's some outstanding detective work!
Will read your solution again once my head is not about to explode.
There are standard combinatoric tricks for converting sets of strictly increasing \(n\)-tuples into sets of weakly increasing \(n\)-tuples, or even (as I did in this case) into sets of \(n\)-tuples of nonnegative numbers. Generally speaking, it is easier to identify what a set of \(n\)-tuples is doing if there are as few restrictions on the shape of its elements as possible; going from \(a_1 < a_2 < \cdots < a_n\) to \(b_1, b_2, \ldots, b_n \ge 0\) was a big simplification, and it seemed to me worth the while (to be more accurate, I have read about how to derive the generating function for the partition function \(P\), and this calculation is similar) to investigate what would happen if we made this conversion. As you can see, it worked!
Where did I get \(B(L,n)\) from? It was forced on me. The formula \[b_j = \begin{cases} a_1 & j = 1 \\ a_j - a_{j-1} - 1 & 2 \le j \le n \end{cases}\] is one of those standard tricks for converting an \(n\)-tuple \(\mathbf{a}\) with \(a_1 < a_2 < \cdots < a_n\) into an \(n\)-tuple \(\mathbf{b}\) with \(b_1, b_2, \ldots, b_n \ge 0\). Check out what effect the requirement that the components of \(\mathbf{a}\) add to \(N\) has on the components of \(\mathbf{b}\), and out pops the sum requirement of \(B(L,n)\), for \(L = N - \tfrac{1}{2}n(n-1)\).
Report section: does \(\iint_{\mathbb{R}^2} \sin(x^2+y^2)\,dx\,dy\) diverge?
This integral fails to converge. If we attempt to evaluate the infinite integral using polar coordinates, we obtain \[\iint_{\substack{x^2+y^2 \le R^2 \\ x,y \ge 0}} \sin(x^2+y^2)\,dx\,dy = \int_0^{\frac{1}{2}\pi}\!\!\int_0^R \sin r^2 \, r\,dr\,d\theta = \tfrac{1}{2}\pi \int_0^R r \sin r^2\,dr = \tfrac{1}{2}\pi \left[-\tfrac{1}{2}\cos r^2\right]_0^R = \tfrac{1}{4}\pi\left(1 - \cos R^2\right),\] which has no limit as \(R \to \infty\).
That we can calculate the infinite integral \[\int_0^\infty \sin(x^2+y^2)\,dx = \int_0^\infty \left(\sin x^2 \cos y^2 + \cos x^2 \sin y^2\right)dx = \sqrt{\tfrac{\pi}{8}}\left(\sin y^2 + \cos y^2\right),\] so that the iterated integral \[\int_0^\infty\left(\int_0^\infty \sin(x^2+y^2)\,dx\right)dy = \tfrac{1}{4}\pi\] exists (which may be the author's intention), is not relevant.
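As a numerical aside (my own sketch, using SciPy's Fresnel functions, which follow the \(\sin(\pi t^2/2)\) normalisation and need rescaling): the quarter-disc values keep oscillating, while \(s(R) = \int_0^R \sin t^2\,dt\) settles down to \(\sqrt{\pi/8}\).

```python
import numpy as np
from scipy.special import fresnel  # fresnel(z) returns (S(z), C(z)) in the sin(pi t^2/2) convention

def s(R):
    """Integral of sin(t^2) from 0 to R, via the rescaled Fresnel S."""
    S, _ = fresnel(R * np.sqrt(2 / np.pi))
    return np.sqrt(np.pi / 2) * S

for R in [5.0, 10.0, 20.0, 40.0]:
    disc = (np.pi / 4) * (1 - np.cos(R**2))  # the quarter-disc value from the polar computation
    print(f"R = {R:4.1f}   quarter-disc = {disc:7.4f}   s(R) = {s(R):.6f}")
print("sqrt(pi/8) =", np.sqrt(np.pi / 8))
```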
Wait... I did your second method and got \(\frac{\pi}{4}\). What's wrong with it?
Or why are there conflicting results when we do polar coordinates? Did I commit some sort of fallacy?
I ask the question back at you. If you are correct, and the integral exists, what is wrong with my calculation using polar coordinates?
The existence of the iterated integral does not imply the existence of the double integral. Particularly so in a case like this, where both stages of the iteration involve infinite Riemann integrals which do not converge in a Lebesgue sense.
This is one of the strengths of Lebesgue integration over Riemann integration; if the double integral exists, then the iterated integral must also exist, and equal the double integral. This works for infinite integrals.
With Riemann integration, we only have a similar result for integrals on bounded areas.
The downside of the strength of Lebesgue integration, theory-wise, is that not so many functions are Lebesgue-integrable. The function \(\sin x^2\) is not Lebesgue-integrable on \(\mathbb{R}\), for example. However, Lebesgue integration theory can handle the Fresnel integrals just as well as Riemann integration, but at the expense of greater notational care. Instead of just writing \(\int_0^\infty \sin x^2\,dx = \sqrt{\frac{\pi}{8}}\), as Riemann integration would do, we would have to write \[\lim_{R\to\infty}\int_0^R \sin x^2\,dx = \sqrt{\frac{\pi}{8}}.\] This is, of course, exactly what Riemann integration means, but it is more careful. When Lebesgue integration uses the integral sign, we can be much more confident about the various manipulations that we can perform with it - for example, we have the ability to convert an iterated integral to a double one.
It took me several hours to understand what you're saying, but I don't think I understood it completely. Maybe because I never (formally) learned what the Lebesgue integral is.
This is the first time I'm hearing of it. I learned some basic calculus (mostly James Stewart's textbook) back in the day, but I don't think I ever came across this line before. It sounds so counterintuitive. If what you said is true (no, I don't doubt you, I'm just shocked), then why didn't my math education ever teach me this basic calculus fact? I'm just venting at this point because I'm disappointed that I didn't learn calculus correctly. As a professor (I assume you are one), do you have any thoughts about this flawed education system?
You completely lost me at this part: "Convert \(\int_0^\infty \cdots\) to \(\lim_{R\to\infty}\int_0^R \cdots\)". Aren't they the same thing?! (Emphasis because I'm puzzled and was caught off guard.) Are you saying that at least one of the following equations is incorrect (or not necessarily true)? Equation one: \[\int_0^\infty\!\!\int_0^\infty \sin(x^2+y^2)\,dx\,dy = \lim_{R\to\infty}\int_0^R\!\!\int_0^R \sin(x^2+y^2)\,dx\,dy\]
Equation two: \[\int_0^\infty\!\!\int_0^\infty \sin(x^2+y^2)\,dx\,dy = \lim_{R_1\to\infty}\,\lim_{R_2\to\infty}\int_0^{R_1}\!\!\int_0^{R_2} \sin(x^2+y^2)\,dx\,dy\]
Or, if worst comes to worst, could you either direct me to a useful textbook that explains all this (I have Rudin, but I felt it's too complicated for a simpleton such as myself), or (if it's possible) simplify what you said without getting too technical? Because I don't fully understand terms like "Lebesgue-integrable" and "Lebesgue integration".
(Note: I don't find Wikipedia helpful because it is not really engaging for newbies such as myself.)
I appreciate your swift and detailed reply (every time). This site needs more people like you. Huge respect.
This is going to be huge.
Riemann integration in 1D is only really defined for bounded intervals. Infinite integrals are fudged by saying that a function can be integrated over an infinite interval if the limit of the integrals over finite intervals exists as those intervals "tend to infinity" by any method possible. Thus we can say (in the Riemann sense) \[\int_0^\infty f(x)\,dx = \lim_{R\to\infty}\int_0^R f(x)\,dx,\] because the limit on the right basically encompasses all the ways that a finite interval can expand to include all positive numbers. However, we cannot define \[\int_{-\infty}^\infty f(x)\,dx = \lim_{R\to\infty}\int_{-R}^R f(x)\,dx,\] since there are more ways of letting a finite interval tend to infinity. We have to define \[\int_{-\infty}^\infty f(x)\,dx = \lim_{X,Y\to\infty}\int_{-X}^Y f(x)\,dx,\] with both \(X\) and \(Y\) tending to infinity at the same time. This is different from the first statement, since it allows for the possibility of \(X\) and \(Y\) tending to infinity at different rates. For example, we would not want to claim that \(\int_{-\infty}^\infty x\,dx = 0\) even though the integrals from \(-R\) to \(R\) all exist and are \(0\); the integrals from \(-R\) to \(2R\) diverge to \(\infty\), for example, and the integrals from \(-2R\) to \(R\) diverge to \(-\infty\).
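To spell out that last claim, here is the one-line computation behind it (my addition, not in the original comment): \[\int_{-R}^{2R} x\,dx = \left[\tfrac{1}{2}x^2\right]_{-R}^{2R} = \tfrac{1}{2}\left(4R^2 - R^2\right) = \tfrac{3}{2}R^2 \to \infty \quad\text{as } R \to \infty,\] and by the same computation the integrals from \(-2R\) to \(R\) equal \(-\tfrac{3}{2}R^2 \to -\infty\).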
If we move on to 2D integration, the first thing to note is that it is defined independently of 1D integration. The integral of a function of two variables over a region is not defined as either of the iterated integrals. It is too long to write out the theory of Riemann integration in 1 or 2 dimensions here. It is a theorem that, for suitably well-behaved functions \(f\), the integral of \(f\) over a rectangle is equal to the iterated integral, but this is a nontrivial result.
Just as 1D Riemann integration is defined for finite intervals, and extended to infinite intervals by a bit of a fudge, infinite 2D integrals have to be obtained as limits in the same way. Without going into all the details, for the integral \(\int_0^\infty\int_0^\infty f(x,y)\,dx\,dy\) to exist, all of the limits \[\lim_{R\to\infty}\int_0^R\!\!\int_0^R f(x,y)\,dx\,dy, \qquad \lim_{R_1,R_2\to\infty}\int_0^{R_1}\!\!\int_0^{R_2} f(x,y)\,dx\,dy, \qquad \lim_{R\to\infty}\iint_{\substack{x^2+y^2\le R^2 \\ x,y\ge 0}} f(x,y)\,dx\,dy\] would have to exist and be equal, as well as any other limit of integrals over finite regions which eventually cover the whole first quadrant.
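A small numerical companion to that statement (my own sketch): by the addition formula, \(\int_0^R\int_0^R \sin(x^2+y^2)\,dx\,dy = 2\,s(R)\,c(R)\), and these square truncations happen to converge to \(\pi/4\), while the quarter-disc truncations never settle; since the two families of limits disagree, the double integral fails to exist.

```python
import numpy as np
from scipy.special import fresnel

def sc(R):
    """(integral of sin t^2, integral of cos t^2) over [0, R], via rescaled Fresnel integrals."""
    S, C = fresnel(R * np.sqrt(2 / np.pi))
    return np.sqrt(np.pi / 2) * S, np.sqrt(np.pi / 2) * C

for R in [5.0, 10.0, 20.0]:
    s, c = sc(R)
    square = 2 * s * c                       # square truncation, by the addition formula
    disc = (np.pi / 4) * (1 - np.cos(R**2))  # quarter-disc truncation
    print(f"R = {R:4.1f}   square = {square:.5f}   disc = {disc:.5f}   pi/4 = {np.pi/4:.5f}")
```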
To be honest, people get lazy with Riemann integration, and forget about the technicalities of the limiting process. The limits are omitted from the integrals, and this can result in people assuming that certain deductions can be made which are not valid. A careful use of Riemann integration is always aware of the problems of infinite integrals.
That is where Lebesgue integration comes in. This is a different formulation of integration theory. It is very similar to Riemann integration, but is structured in a subtly different manner. This has a number of very positive consequences: for instance, powerful convergence theorems (such as the Dominated Convergence Theorem), and the guarantee discussed above that an existing double integral equals the iterated integrals, even on infinite domains.
The downside is that fewer functions are Lebesgue-integrable. However, when a Lebesgue integration theorist talks about the integral \(\int_0^\infty f(x)\,dx\), she is not talking about a cute limit of finite integrals, but about something that is naturally defined on the whole infinite interval, and which therefore can benefit from all the advantages I listed above.
If we look at an "identity" like \[\int_0^\infty f(x)\,dx = \lim_{R\to\infty}\int_0^R f(x)\,dx,\] then this can be read in different ways: in the Riemann theory it is the definition of the left-hand side, while in the Lebesgue theory it is a theorem about an integral that already has its own independent meaning.
I know this is jumping around a lot, but here is a pretty standard example of a function for which evaluating the iterated integrals does not help. The integrals \[\int_0^1\left(\int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2}\,dx\right)dy, \qquad \int_0^1\left(\int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2}\,dy\right)dx\] evaluate to \(\pm\tfrac{1}{4}\pi\); I leave it to you to determine which is which. Thus the two iterated integrals, although they both exist, tell us nothing about the integrability of this function over the square; this function is not even Riemann-integrable over the square.
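You can also just compute both iterated integrals numerically (my sketch; SciPy may emit accuracy warnings near the singular corner at the origin, but the values come out cleanly):

```python
from scipy.integrate import quad

f = lambda x, y: (x**2 - y**2) / (x**2 + y**2) ** 2

dx_first = quad(lambda y: quad(lambda x: f(x, y), 0, 1)[0], 0, 1)[0]
dy_first = quad(lambda x: quad(lambda y: f(x, y), 0, 1)[0], 0, 1)[0]
print(dx_first, dy_first)  # approx -pi/4 and +pi/4
```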
Discussion about 0.99999... = 1
The problem with this issue is that we find it difficult to get our heads around limits of series. Since these can be pretty weird, that is forgivable!
Let us assume that the number \(a = 0.\dot{9}\) exists. Since \(0.999\cdots9\) (for any finite number of 9s) is less than 1, we can reasonably deduce that \(a \le 1\).
Suppose now that \(0 < b < 1\). If we consider the decimal expansion of \(b\), we will eventually find a digit after the decimal point which is less than 9. In other words, we will be able to write \(b < 0.999\cdots9\), where the number on the right of this inequality has a finite number of 9s after its decimal point. Thus \(b < a\).
In other words, the number \(a\) is less than or equal to 1, but is greater than any number less than 1. This means that \(a\) must be equal to 1. In other words, if \(0.\dot{9}\) exists, it must be equal to 1.
The only problem left is that of accepting that the number a exists in the first place. This is a nontrivial issue, and many mathematicians, starting with the ancient Greeks, have had problems with infinite limits. Consider such "paradoxes" as Zeno's paradox or the Achilles and the tortoise paradox.
If we are prepared to accept a mathematical system within which limits exist, then \(0.\dot{9}\) will both exist and be equal to \(1\), and \(0.\dot{3}\) will exist and be equal to \(\frac{1}{3}\). The existence of such limits is based upon what is called the Completeness Axiom of the Reals; while mathematicians might try to see what can be deduced in the absence of this Axiom, I would think that few disbelieve it.
It is interesting that students find the identity \(0.\dot{9} = 1\) harder to accept than they do \(0.\dot{3} = \frac{1}{3}\). Accepting that either of these numbers exists involves coping with the idea of an infinite number of entries in a decimal expansion; there is something about the fact that the identity \(0.\dot{9} = 1\) increases the digit before the decimal point that gets people agitated, while students at a very early age have few problems with \(0.\dot{3} = \frac{1}{3}\). From a teaching point of view, therefore, observing that \(3 \times 0.\dot{3} = 0.\dot{9}\), and therefore that \(0.\dot{9} = 3 \times \frac{1}{3} = 1\), is an argument which can content most students, at least until they can get to grips with the proper definition of the limit of a series.
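For what it's worth, here is a tiny exact-arithmetic demo of the limit in question (my addition): the partial sums \(0.9, 0.99, \ldots\) differ from 1 by exactly \(10^{-n}\), which can be made smaller than any positive number.

```python
from fractions import Fraction

for n in (1, 5, 10, 20):
    partial = sum(Fraction(9, 10**k) for k in range(1, n + 1))
    print(n, float(partial), 1 - partial)  # the gap is exactly Fraction(1, 10**n)
```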
(Printer is on) hahaahahah! Just kidding!
You wrote what Otto Bretscher wrote up here in the solution discussion, but in a much clearer manner!
Good read! I didn't think of the analogy between the Tortoise paradox and \(0.\dot{9} = 1\).
Completeness Axiom of the Reals? Woah! First time hearing about this! Good read! Your comments are always so valuable! Brilliant should always send me a notification whenever you post anything on Brilliant!!
Can you explain why this happens? Or why mathematicians disagree about this (Is it because most people can't grasp new concepts like the controversy over Cantor's theory)? Does this issue still occur? And why?
It is not so much a case of mathematicians disagreeing about it; the Completeness Axiom (i.e. the existence of least upper bounds) is what defines the reals. Mathematicians can, however, be interested in determining how much can be deduced without assuming this Axiom. An increasing bounded sequence converges to its least upper bound; if we drop the Completeness Axiom, then we cannot have limits of any sequences except those which are eventually constant (and for which convergence is obvious). No limits means no calculus, and the whole field of Analysis goes down the tubes. What you are left with is mathematics that can be done within the rationals - Diophantine equations and the like. These are, of course, exceptionally rich fields of study.
It is interesting to note how Analytic Number Theory has found great use for the real number system, and the machinery of real analysis, in the service of solving equations and problems about rational numbers.
In a similar manner, logicians are interested in what can be deduced if you remove Proof by Contradiction from your armoury (this is an example of what is called Constructivist Logic). This is important, since computers cannot prove by contradiction, and so results that can be proved without using Proof by Contradiction are essentially those that can be machine-proved.
I didn't know much about this subject matter (the Completeness Axiom) until recently, so forgive me if I sound ignorant, but is it in any way similar to accepting/rejecting the Axiom of Choice? Because I'm more familiar with that area as compared to the Completeness Axiom.
Can you explain to me why these mathematicians prefer to restrict themselves by not assuming this axiom? Because I don't see the benefit of it at all. Is it because they want to find a more elegant solution to the necessary questions? Or because they think that this Axiom is not rigorous (highly doubt it)? Or something else entirely?
From your entire first paragraph, it sounds to me like what you're saying is that some mathematicians prefer not to use the Completeness Axiom simply as a matter of personal preference, no?
Wait, isn't proof by contradiction super duper important? Why are logicians interested in handicapping themselves by giving up that commonplace method? What's the point of all this? Is there a benefit to this restriction?
The Axiom of Completeness of the reals has nothing to do with the Axiom of Choice, except that they are both Axioms! The Completeness Axiom says that any bounded nonempty set of reals has a least upper bound, something which is not true of the rationals. The Axiom of Choice states that it is possible (essentially) to make uncountably many selections at the same time - if I have a collection of sets \(S_x\), one for each real number \(x\), the AofC says I can create a function \(f\) with domain \(\mathbb{R}\) such that \(f(x) \in S_x\) for all \(x\). This Axiom permits transfinite induction (induction on uncountable sets). Much high-level mathematics would be impossible without this Axiom. However, it is still an Axiom, in that we have no proof that it is true. Mathematics with and mathematics without the AofC are both perfectly consistent; maybe one day someone will discover a subtle difference between what is possible in both systems which will determine which is true!
Mathematics is the business of investigating what is possible. It can be interesting to investigate what can be deduced if only some properties, or methods of argument, are assumed. That is why mathematicians study groups, for example. There are many cases where a number system is more complex than being just a group, but knowing its group-theoretic properties is nonetheless interesting. Alternatively, Algebraic Number Theorists can be quite happy studying the integers, and do not always need to think about the reals at all (on the other hand, Analytic Number Theorists have found great value in real number theory in the study of integers).
As I said previously, if you are a computer theorist, you are not that fond of Proof by Contradiction, since your pet machines cannot use it. You are therefore extremely interested in what can be proved without it. Mathematical Logic without Proof by Contradiction is perfectly consistent and sensible; the only problem is that there are lots of elementary statements that are true, but cannot be proved, within it. Of course, Gödel's Incompleteness Theorem proves that there are true, but unprovable, statements within any sufficiently complex system of logic. The difference is that we don't ever know in general which those statements are, whereas it is easy to find true statements which cannot be proved without Contradiction.
Thanks for your immediate reply!
I spent some time last week reading up on all these axioms again, and to be honest, I don't fully grasp most of them. Maybe it's due to the extreme lack of exercises in the (only) textbook that I bought, or the fact that there are so many complicated "consequences" of this Axiom (of Choice), like Zorn's Lemma and Hausdorff's Maximal Principle.
Do you have any good books on Set Theory? Without sufficient exercises, I will (still) find it hard to grasp these concepts. The same goes for any books that teach the Completeness Axiom, because I don't think Wikipedia paints a good picture for newbies to understand it, nor can I find any good YouTube videos for it, and most of the other explanations that I can find appear very handwavy.
The classic text is P. R. Halmos's "Naive Set Theory", but it is pretty tough going.
Oh I bookmarked this and I forgot to reply.
Are you referring to this one? Okay! ordered!
Pretty tough-going? Even for you? Well, I guess I can only master this topic in the next decade.
Thanks for the dialog!
That's the one. It should get you started.
Thank you! You're really really helpful!!
Discussion about the Dirichlet convolution under Manzoor's solution
Can you list some EASY functions to include on this page?
Please don't link to Wikipedia again; it's hard to understand most of what they're saying.
I would start with a discussion of multiplicative functions, namely functions \(f\) on \(\mathbb{N}\) such that \(f(mn) = f(m)f(n)\) whenever \(m, n\) are coprime. Examples of these are legion: the identity function, the number-of-divisors function, the sum-of-divisors function, the Euler totient function, the Möbius function, the many functions that lurk in Brilliant Number Theory problems...
Then introduce the Dirichlet convolution \(\star\), with the observation that \(\star\) is commutative and associative, and the result that the convolution of two multiplicative functions is multiplicative. Since \(\mu \star 1\) is multiplicative, it is easy to show that it is equal to \(\delta\), where \[\delta(n) = \begin{cases} 1 & n = 1 \\ 0 & n \ge 2 \end{cases}\] is the multiplicative identity for the Dirichlet convolution. This is enough to show that if \(f = g \star 1\), then \(g = f \star \mu\), which is the Dirichlet inverse result.
At this point, easy counting of HCFs shows that \(\phi \star 1 = \mathrm{id}\), which gives us \(\phi = \mu \star \mathrm{id}\), and we are away...
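To make the \(\mu \star 1 = \delta\) and \(\phi = \mu \star \mathrm{id}\) chain concrete, here is a small computational sketch (my own code, with helper names of my choosing; the wiki itself need not use code):

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    """Mobius function: 0 if n has a squared prime factor, else (-1)^(number of prime factors)."""
    if n == 1:
        return 1
    m, count, p = n, 0, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            count += 1
            if m % p == 0:
                return 0  # squared prime factor
        else:
            p += 1
    if m > 1:
        count += 1
    return (-1) ** count

def totient(n):
    """Euler's phi, by direct coprimality count (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def dirichlet(f, g, n):
    """(f * g)(n) = sum over d | n of f(d) g(n/d)."""
    return sum(f(d) * g(n // d) for d in divisors(n))

delta = lambda n: 1 if n == 1 else 0
for n in range(1, 100):
    assert dirichlet(mobius, lambda _: 1, n) == delta(n)    # mu * 1 = delta
    assert dirichlet(mobius, lambda d: d, n) == totient(n)  # phi = mu * id
print("mu * 1 = delta and mu * id = phi verified for n < 100")
```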
Don't introduce the Dirichlet convolution out of nowhere; put it in the context of multiplicative functions, with the convolution as a useful device for creating new multiplicative functions from old, and the reason for its existence/usefulness becomes much more obvious.
Woah that's super detailed! Let me see if there's a way to improve that wiki page (as it is already written up).
Please tell me you've published books/autobiography before, I want to read everything a god (you) has to say!
Mark's alternative solution to \(\int_0^{\pi/2} (\ln \cos x)^2\left((\ln \cos x)^2 - 6x^2\right)dx\)
Can you please post your solution here so I can print and frame it? (Pretty please)
Relevant problem.
My method was hard work (but I was using Mathematica to perform the algebra). I differentiated \(\tfrac{1}{2}B\big(\tfrac{1}{2}(u+1), \tfrac{1}{2}\big)\) four times, and put \(u = 0\), to obtain \[a = \int_0^{\frac{1}{2}\pi} [\ln(\cos x)]^4\,dx = \tfrac{19}{480}\pi^5 + \tfrac{1}{4}\pi^3(\ln 2)^2 + \tfrac{1}{2}\pi(\ln 2)^4 + 3\pi \ln 2\,\zeta(3).\] I then differentiated the (manipulated) W&W formula twice with respect to \(u\), putting \(v = 0\), to obtain \[\int_0^{\frac{1}{2}\pi} [\ln(\cos x)]^2 \cos vx\,dx = \frac{1}{12v^3}\left[12 + 2\pi^2 v^2 - 3\pi^2 v^2 \,\mathrm{cosec}^2\big(\tfrac{1}{2}v\pi\big) + 3v^2\big(H_{-\frac{1}{2}v} + H_{\frac{1}{2}v} + 2\ln 2\big)^2\right]\sin \tfrac{1}{2}v\pi.\] I then differentiated this twice with respect to \(v\) and put \(v = 0\) to get the second integral, \[b = \int_0^{\frac{1}{2}\pi} [\ln(\cos x)]^2\, x^2\,dx = \tfrac{11}{1440}\pi^5 + \tfrac{1}{24}\pi^3(\ln 2)^2 + \tfrac{1}{2}\pi \ln 2\,\zeta(3),\] and \[a - 6b = \left(\tfrac{19}{480}\pi^5 + \tfrac{1}{4}\pi^3(\ln 2)^2 + \tfrac{1}{2}\pi(\ln 2)^4 + 3\pi\ln 2\,\zeta(3)\right) - \left(\tfrac{11}{240}\pi^5 + \tfrac{1}{4}\pi^3(\ln 2)^2 + 3\pi\ln 2\,\zeta(3)\right) = \tfrac{1}{2}\pi(\ln 2)^4 - \tfrac{1}{160}\pi^5.\]
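A quick high-precision check of the stated closed form for \(a\) is easy to run (my own sketch; mpmath's tanh-sinh quadrature copes with the logarithmic endpoint singularity):

```python
from mpmath import mp, quad, log, cos, pi, zeta

mp.dps = 25
a_numeric = quad(lambda x: log(cos(x))**4, [0, pi / 2])
a_closed = (19 * pi**5 / 480 + pi**3 * log(2)**2 / 4
            + pi * log(2)**4 / 2 + 3 * pi * log(2) * zeta(3))
print(a_numeric)
print(a_closed)  # compare: the two should agree to working precision
```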
Aditya's method of using the W&W formula is much better than mine, since the integral he obtains is much easier to differentiate multiple times. I made the mistake of working out the first integral quickly, and then focused on getting the second integral on its own, rather than obtaining the two together!
My concern with Aditya's method is not that it was not a good one (it was!), but that the key formula he wanted to use needed more proof than he gave it.
Hey, the problem with my method is that I have used the binomial theorem to prove the result for only positive integral values of \(u\), right? Differentiation is not allowed in that case, and it was silly of me to overlook this. But I am now editing the solution so that I use the binomial series, which converges for the given conditions, i.e. \[(1+z)^u = \sum_{k=0}^\infty \binom{u}{k} z^k,\] where \(\binom{u}{k}\) is the generalized binomial coefficient. Since \(|z| = 1\) and \(u > -1\), the series converges for all \(u\) taken, right? I hope this fixes my incomplete solution. Anyway, thanks for showing another amazing way to do it!
It's still not going to be straightforward, since the series does not converge at \(x = \tfrac{1}{2}\pi\) when \(u < 0\) (and you want to differentiate at \(0\), so you have to handle what is happening for negative \(u\)). Showing that you can integrate the series term-by-term is not automatic.
The series does converge uniformly in x for all u>0, and so we can calculate the integral for all u>0 by your proposed method. We can then differentiate four times, and let u tend to 0, using continuity in u of all the expressions involved, thanks to the DCT.
Top of the page
(Need to convert to GIF)
Above is a Tic Tac Toe game. Tic-tac-toe is a 2-player game in which the players X and O take turns filling a 3×3 grid. Whoever places three of their marks in a vertical, horizontal, or diagonal row wins the game. The game is a draw whenever nobody wins. As in chess, a player needs good observation, tactics, and strategy to win a game of tic-tac-toe.
This wiki page is designed to help readers understand how to identify and avoid forks when playing tic-tac-toe.
For consistency, on this page X is always the first player and O the second. The centre is the centre square of the grid (obviously), a corner is a corner square of the grid, and an edge is a square adjacent to both a corner and the centre.
To avoid repeated explanations, the reader should be able to identify positions that are equivalent simply because of rotations. For example, the pictures on the left are equivalent to the corresponding pictures on the right.
This wiki page does not intend to list all possible forking scenarios; take it in the spirit that you should not memorize all the forking configurations, but rather learn ways of identifying and avoiding forking formations. After reviewing this page, you should know that if both players play optimally, the game always ends in a draw.
Contents
1. Basic Rules and Definitions
...... 1.1 Rules
...... 1.2 Thinking one step ahead (see pic)
Question: Where should player 2 move?
Answer: Obviously the top left; otherwise you lose the game.
(The colors were meant to color the entire cell)
…. 1.3 THREE sets of openings:::
Noting the symmetries, we first note that there are only 3 essentially different openings for player X: he/she can play in the centre, an edge, or a corner.
The picture above should be pretty much self-explanatory for the terms: edges, middle, corners.
.... 1.4 Define (adjacent corner/furthest corner/adjacent edge/farther edge/ etc)
I can't really express this in words (so I need someone else to do it, possibly Brock because he's the best).
2 Identifying forks
Define what a fork is >>> you attack your opponent in two positions at once, and he/she can only block one, so you can win on the next move. In other words, with a fork, you win the game. The picture in the introduction shows that player 1 makes a fork on his third move; this is because he identified that player 2 made a mistake on their very first move. This part explains how to see 2 or more steps ahead, instead of just one step ahead as in section 1.2, so that you can either avoid or create forks.
We will not show all possible forks if player 1 starts at an edge, because there are too many cases; however, we will give a couple of examples to illustrate what we're saying.
Case 1:
As shown in the introductory example, if player 1 places his X in a corner and player 2 places his O in the opposite corner, player 1 can force player 2 to place his O on the bottom middle edge, setting up a fork on his third move. In other words, player 1 should identify that player 2 placed his O in the wrong place (the corner opposite the X) from the start. Synopsis: if player 1 places his X in a corner and player 2 places his O in the opposite corner, player 1 wins. So as player 2, you should not place your first O in the corner directly opposite player 1's X.
Case 2:
Refer to the GIF above. The setup is as follows: player 1 places X in the middle, player 2 follows up by placing O on an edge, and player 1 reacts by placing his X in a corner adjacent to the O (left or right, it does not matter), forcing player 2 to defend in the bottom corner. Finally, player 1 makes a fork by placing X in the other bottom corner, attacking on two fronts.
So as player 1, if you start in the middle, you should identify that you can set up a fork once player 2 places his first move on an edge; and as player 2, to avoid this fork when player 1 starts in the middle, you should always place your first move in a corner.
Case 3:
Refer to the GIF above; the setup is as follows: player 1 places his first move in a corner, player 2 places his first move in the middle, and player 1 places his second move in the opposite corner. In this case, it is player 2's mistake to place his second move in another corner, as he not only forces player 1 to place his X in the remaining corner but also inadvertently allows player 1 to create a fork.
So as player 2, in a setup such as this, you should place your second move on an edge instead of in a corner.
Case 4:
Refer to the GIF above; the setup is as follows: player 1 places his first move in a corner, and player 2 follows up by placing his first move on the edge adjacent to player 1's X. Then player 1 places his next move in the middle, forcing player 2 to block at the bottom corner, and player 1 forks by placing his third X in the other bottom corner.
So as player 1, if you start in a corner and player 2 follows up by placing his O on the adjacent edge, you should identify that you can make a fork; as player 2, you should not place your first move on the adjacent edge if player 1 makes his first move in a corner.
Case 5:
Refer to the GIF above; the setup is as follows: player 1 places his first move in a corner, and player 2 follows up by placing his first move on the edge opposite player 1's X. Then player 1 places his next move in the middle, forcing player 2 to block. This both forces player 2 to block and creates a fork, as in Case 4.
So as player 1, if you start in a corner and player 2 follows up by placing his O on the opposite edge, you should identify that you can make a fork; as player 2, you should not place your first move on the opposite edge if player 1 makes his first move in a corner.
Case 6:
Refer to the GIF above; the setup is as follows: player 1 places his first move in a corner, and player 2 follows up by placing his first move in a corner adjacent to player 1's. Then player 1 places his next move in the corner opposite his first move. This forces player 2 to block in the middle, and player 1's next move both blocks at the remaining corner and creates a fork.
So as player 1, if you start in a corner and player 2 follows up by placing his O in an adjacent corner, you should identify that you can make a fork; as player 2, you should not place your first move in an adjacent corner if player 1 makes his first move in a corner.
Case 7:
Refer to the GIF above; the setup is as follows: player 1 places his first move on an edge and player 2 places his first move on an adjacent edge; then player 1 follows up by placing his second move on the edge adjacent to player 2's move.
So as player 2, in a setup such as this, do not place your second move in the corner opposite your first move.
Case 8:
Refer to the GIF above; the setup is as follows. This is an extension of the introductory example. Player 1 places in the middle, player 2 places in a corner, and player 1 follows up by placing his move in the corner opposite player 2's. Player 2 then places his O on the edge farther from his first move, and player 1 can make a fork by placing his move in the corner opposite player 2's last move.
So as player 1, you should identify that if player 2 makes his second move as above, then you can make a fork; as player 2, you should not place your second move there.
Case 9:
Refer to the GIF above; the setup is as follows. This is an extension of the introductory example. Player 1 places in the middle, and player 2 places on the edge adjacent to his own first move; player 1 is then both forced to block player 2 and able to create a fork.
So as player 2, do not place your second move there.
Case 10:
Refer to the GIF above; the setup is as follows: player 1 places on an edge and player 2 follows up by placing on an adjacent edge. Player 1 should then be able to identify a fork: he places his second move in the middle, forcing player 2 to block, and lastly places his move in the corner adjacent to both players' first moves.
So as player 1, if you place your first move on an edge, identify that a fork can occur if player 2 places his first move on an adjacent edge; as player 2, do not place your first move on an edge adjacent to player 1's first move when it is on an edge.
Case 11:
If player 2 places his second move in the middle or in the corner opposite his first move, then player 1 is forced to block and creates a fork. So as player 2, in this setup, your next move is to place your O in the corresponding row or column to force a move from player 1.
Case 12:
This is an extension of Case 6. If player 1 places on an edge, player 2 then places in a corner adjacent to player 1, and player 1 places his second move in a corner that is adjacent to the row of player 2's first move but not adjacent to player 1's first move, then if player 2 does not place his second move in any of the colored regions, player 1 can fork at (blue).
Case 13:
(to be filled in) So as player 1, don't place your second move there; and as player 2, attack diagonally.
3 Algorithm to play:
3.1 Based on the few cases mentioned, we can summarize as follows:
Suppose P1 chooses a corner; then P2 MUST choose the middle, or else he loses (see cases 1, 4, 5, 6). > Then, looking a possible second step ahead, if player 1 places in the opposite corner, P2 must avoid the fork by placing his second move on an edge (see case 3) >> forces a draw.
Suppose P1 chooses the middle; then P2 MUST choose a corner, or else he loses (see case 2). > Then, looking a possible second step ahead, P1 chooses the corner opposite P2's first move > then, to prevent a fork >>> P2 must place his second move in another corner (see cases 8 and 9) >> forces a draw.
Suppose P1 chooses an edge; then P2 can either:
............. choose the middle > P1 chooses the edge adjacent to his first move > P2 chooses any of the 3 corners adjacent to P1's X >> forces a draw
............. choose the middle > P1 chooses the corner farthest from his first move > P2 defuses one of the three forking squares (the 2 corners adjacent to P1's first move, or the edge adjacent to P1's first and second moves)
............. choose an adjacent corner; then P1 should not choose the edge farthest from his own first move, or else he loses (see case 10).
3.2: To simplify the algorithm, we can adopt the following strategy:
Or
Or
In other words, whoever "breaks off" from either of the first two algorithms loses, and players should beware of potential forks when P1 starts on an edge.
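Since the draw claim is the crux of the whole page, here is a compact minimax sketch (my own illustrative code, not part of the wiki draft) that verifies it by exhaustive search:

```python
from functools import lru_cache

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    """Return 'X' or 'O' if a line is completed, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value with optimal play: +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    moves = [value(board[:i] + player + board[i+1:], nxt)
             for i, cell in enumerate(board) if cell == "."]
    return max(moves) if player == "X" else min(moves)

print(value("." * 9, "X"))  # 0 -> tic-tac-toe is a draw under optimal play
```

Memoising on the board string keeps the search over the few thousand reachable positions instant.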
1) Converting \(ax^2+bx+c \equiv 0 \pmod p\) \(\Longleftrightarrow\) \(x^2 \equiv a \pmod p\): show that all quadratic congruences can be simplified into this form.
2) Define what QR and QNR are, and that it suffices to consider residues up to \(p-1\); so \(x^2 \equiv a \pmod p\) has either 0 or 2 solutions for \(p \nmid a\).
3) Listing out the quadratic residues of 13 (like in practice), show that \(x^2 \equiv 5 \pmod{13}\) has no solution.
4) Table of quadratic residues (1 to 25), maybe?
5) Statement of Euler's criterion; solve \(x^2 \equiv 5 \pmod{13}\) by Euler's criterion (see the sketch after this list).
6) Multiplicative property statement: of \(x^2 \equiv a \pmod p\), \(x^2 \equiv b \pmod p\), \(x^2 \equiv ab \pmod p\), either exactly 1 or all 3 are solvable.
7) Write an example to prove the multiplicative property. Apply Fermat's two-square identity, maybe?
8) Extension from above::: listing out the quadratic residues of 13: \((1/13) = (3/13) = (4/13) = \cdots = (12/13) = 1\), \((2/13) = (5/13) = (6/13) = \cdots = (11/13) = -1\).
9) Briefly define the Legendre symbol, and solve \(x^2 \equiv 5 \pmod{13}\) by the Legendre symbol; some properties: if \(a \equiv b \pmod p\), then \((a/p) = (b/p)\).
10) Briefly define the Law of Quadratic Reciprocity (LOQR), list some properties of LOQR, and solve \(x^2 \equiv 5 \pmod{13}\) by LOQR.
11) Mention without proof: there are precisely \((p-1)/2\) positive QRs and \((p-1)/2\) QNRs for odd prime \(p\).
12) State Gauss's Lemma and solve \(x^2 \equiv 5 \pmod{13}\) by Gauss's Lemma.
13) Briefly touch on: \((-1/p) = 1\) if \(p \equiv 1 \pmod 4\), and \((-1/p) = -1\) if \(p \equiv 3 \pmod 4\); alternatively, \((-1/p) = (-1)^{(p-1)/2}\).
14) Briefly touch on: \((2/p) = 1\) if \(p \equiv \pm 1 \pmod 8\), and \((2/p) = -1\) if \(p \equiv \pm 3 \pmod 8\); alternatively, \((2/p) = (-1)^{(p^2-1)/8}\).
15) Briefly touch on: \((3/p) = 1\) if \(p \equiv \pm 1 \pmod{12}\), and \((3/p) = -1\) if \(p \equiv \pm 5 \pmod{12}\).
16) Summarize by showing that we can convert to familiar symbols like \((-1/q)\), \((1/q)\), \((2/q)\), \((3/q)\), and show a relevant example, e.g. \((29/53) = (53/29) = \cdots = (2/29)(3/29) = \cdots = 1\).
17) Mention that if \(m\) is any perfect square less than \(p\), then \((m/p) = 1\) (the same thing as mentioning that \((4/p) = 1\), which seems obvious).
18) If \(A\) is a QR, then so is \(k^2 A\) (yes, it's duh, but it's a basic property that's good to know). E.g. \((8/p) = \,?\)
19) Find a suitable example which can utilize all (or most) of the techniques, e.g. \(x^2 \equiv 60 \pmod{677}\) (answer: a solution exists).
20) Briefly touch on composite moduli: \(x^2 \equiv a \pmod{p^n}\) has a solution when \((a/p) = 1\).
21) No proof: some properties for composite moduli: \(x^2 \equiv a \pmod 2\) always has a solution; \(x^2 \equiv a \pmod 4\) has a solution iff \(a \equiv 1 \pmod 4\); \(x^2 \equiv a \pmod{2^n}\) for \(n \ge 3\) has a solution iff \(a \equiv 1 \pmod 8\).
22) Theorem for composite moduli: let \(n = 2^{k_0} p_1^{k_1} \cdots p_r^{k_r}\) for \(n > 1\) with \(\gcd(a,n) = 1\); then \(x^2 \equiv a \pmod n\) is solvable iff \((a/p_i) = 1\) for \(i = 1, 2, \ldots, r\), with \(a \equiv 1 \pmod 4\) if \(4 \mid n\) but \(8 \nmid n\), and \(a \equiv 1 \pmod 8\) if \(8 \mid n\).
23) Relevant examples for (22): solve the congruence \(x^2 \equiv 31 \pmod{114}\); determine the number of solutions of the congruences \(x^2 \equiv 3 \pmod{(11 \cdot 23)^2}\) and \(x^2 \equiv 9 \pmod{8 \cdot 3 \cdot 25}\).
24) An example or two: find the least quadratic nonresidue modulo \(p\), for some \(p\).
25) Use the example \(37 \mid (x^2 - 31x - 34)\) <<< completing the square. Main purpose: identify what residue we should be looking out for; that is, \(x^2 \equiv a \pmod p \Rightarrow a = \,?\)
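A tiny sketch tying items 2, 3, and 5 together for \(p = 13\) (my own code; only Python's built-in `pow` is used): list the quadratic residues, then test solvability of \(x^2 \equiv a \pmod{13}\) both by direct search and by Euler's criterion \(a^{(p-1)/2} \equiv \pm 1 \pmod p\).

```python
p = 13
residues = sorted({x * x % p for x in range(1, p)})
print("QRs mod 13:", residues)        # [1, 3, 4, 9, 10, 12]

for a in (5, 10):
    euler = pow(a, (p - 1) // 2, p)   # 1 -> QR, p-1 (i.e. -1) -> non-residue
    print(f"a = {a}: Euler criterion gives {euler}, "
          f"solvable by direct search: {a in residues}")
# a = 5 gives 12 (= -1 mod 13): no solution; a = 10 gives 1: solvable.
```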
Possible flow chart::::
link.
link