

  • Not sure how much you’d care, but I came back to this and found (by hand) a function which closely approximates the solution - it’s not exact but it’s also not super far. Graph.

    I think I could also solve the differential equation by hand at this point (getting the same solution as before) - I haven’t, but I’m pretty sure I could if I wanted to for whatever reason. I’m doubtful it’s possible to get an exact solution in terms of just x but if you ever manage I’d love to see how.


  • One additional note here, prompted by your parenthetical message: my short backlog of problems I found interesting has actually run dry with the last one I posted - so my series is unfortunately on hiatus. I’ll still post if I come across other interesting problems, but that honestly doesn’t happen too often - even the ones I have posted were collected (and saved by a friend) over several years.





  • solution

    I’m sure there’s some short elegant solution here that uses beautiful vector math. Instead I went for the butt ugly coordinate geometry solution.

    Let u = <a, b> and v = <c, d>

    See diagram. WLOG, assume u has a smaller angle than v. We can find cos(θ) by constructing a right triangle as shown, and by finding a new vector w, in the same direction as u, which has the correct length to complete our right triangle. Once done, we will have cos(θ) = |w| / |v|.

    Let’s consider each vector to be anchored at the origin. So we can say u lies on the line y = (b/a)x, and v lies on the line y = (d/c)x. To find w, let us first find z - the third side of the triangle, which we know must be perpendicular to u, and pass through (c, d).

    The line perpendicular to y = (b/a)x and passing through (c, d) is y = (-a/b)(x-c) + d. So this is the line containing the third side of our constructed triangle, z. To find w, then, let’s find its point of intersection with y = (b/a)x, the line containing w.

    (b/a)x = (-a/b)(x-c) + d → Setup

    (b/a)x = (-a/b)x + (ca/b) + d → Distribute on right

    x(b/a + a/b) = (ca/b) + d → Add (a/b)x to both sides, factor out x on left

    x(a² + b²) = ca² + dab → Multiply both sides by ab

    x = (ca² + dab) / (a² + b²) → Divide both sides by (a² + b²)

    x = (ca² + dab) / |u|² → a² + b² is |u|²

    y = (b/a)x = (b/a)(ca² + dab) / |u|² → Plug solution for x into (b/a)x, don’t bother simplifying as this form is useful in a moment

    So w = <(ca² + dab) / |u|², (b/a)(ca² + dab) / |u|²>. Now we need |w|.

    |w| = sqrt((ca² + dab)² / |u|⁴ + (b²/a²)(ca² + dab)² / |u|⁴) → Plugging into normal |w| formula

    |w| = sqrt(((ca² + dab)² / |u|⁴) * (1 + b² / a²)) → Factor out (ca² + dab)² / |u|⁴

    |w| = (ca² + dab) / |u|² * sqrt(1 + b² / a²) → Pull (ca² + dab)² / |u|⁴ out of the root (this takes ca² + dab ≥ 0; if it’s negative, the root yields |ca² + dab| instead and the sign has to be tracked separately)

    |w| = (ca² + dab) / |u|² * sqrt((a² + b²) / a²) → Combine terms in root to one fraction

    |w| = (ca² + dab) / |u|² * |u| / a → Evaluate sqrt

    |w| = (ca + db) / |u| → Cancel out |u| and a from numerator and denominator

    So this is the length of our adjacent side in the constructed right triangle. Finally, to find cos(θ), divide it by the length of the hypotenuse - which is |v|.

    cos(θ) = |w| / |v| = (ac + bd) / (|u||v|)

    And note that ac + bd is u•v. So we’re done.

    cos(θ) = u•v / (|u||v|), or |u||v|cos(θ) = u•v.

    Note that since this formula is symmetric with respect to u and v, our assumption that u’s angle was smaller than v’s angle did not matter - so this should hold regardless of which is larger.
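The identity can also be spot-checked numerically - a quick sketch in Python, with the perpendicular-foot construction replaced by measuring the angle directly via atan2:

```python
import math
import random

# Spot-check cos(theta) = (ac + bd) / (|u||v|) against the angle measured
# directly with atan2, for many random vector pairs u = <a, b>, v = <c, d>.
random.seed(1)
for _ in range(1000):
    a, b, c, d = (random.uniform(-5, 5) for _ in range(4))
    norm_u, norm_v = math.hypot(a, b), math.hypot(c, d)
    if norm_u < 1e-9 or norm_v < 1e-9:
        continue  # skip near-zero vectors, where the angle is undefined
    theta = abs(math.atan2(d, c) - math.atan2(b, a))
    if theta > math.pi:
        theta = 2 * math.pi - theta  # use the smaller angle between the vectors
    assert abs(math.cos(theta) - (a * c + b * d) / (norm_u * norm_v)) < 1e-9
print("ok")
```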


  • In probability, two events are said to be independent if one event happening has no effect on the probability of another event happening. So coin flips, as an example, are independent - because when you flip a coin and get Tails, that doesn’t affect the probability of the next coin flip also coming up Tails.

    So in this context, asking if A and B are independent is asking: Does knowing the game lasts at least four turns change the probability of winning? And similarly for A and C. Does knowing the game lasts at least 5 turns change the probability that the game will end in a victory?

    Rest of response

    To be clear about your other answers: P(B) = 1/16 and P(C) = 1/32 are not correct - I was saying that if you adjusted your formula from 1/2^n to 1/2^(n-1), then your answers would be correct. So the probability of the game lasting at least four turns is 1/2^(4-1) = 1/2^3 = 1/8, and the probability of it lasting at least 5 turns is 1/2^(5-1) = 1/2^4 = 1/16.

    But I think you were really asking why this didn’t affect your win rate - that’s because there’s a subtle difference between “making it to the nth round” and “having the game end on the nth round”: once you make it to a round, you still have a 1/2 probability of ending the game there, which makes the probability of ending on the nth round the 1/2^n you used.
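For anyone who wants to check the arithmetic, here’s a small sketch using exact fractions (assuming, as above, a fair coin that ends the game on each Heads):

```python
from fractions import Fraction

# P(game ends exactly on turn n) = 1/2^n: n-1 Tails followed by one Heads.
def p_end_on(n):
    return Fraction(1, 2**n)

# P(game lasts at least n turns) = P(first n-1 flips are all Tails).
def p_reach(n):
    return Fraction(1, 2**(n - 1))

print(p_reach(4))  # 1/8
print(p_reach(5))  # 1/16

# Summing "ends exactly on turn k" for k = 4..60 recovers p_reach(4),
# minus the truncated tail of 1/2^60.
assert sum(p_end_on(k) for k in range(4, 61)) == p_reach(4) - Fraction(1, 2**60)
```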


  • response

    This is the correct answer, although P(A|B) should actually be 2/3 rather than 7/12 - I think you meant 1/2 + 1/8 + 1/32 + …?

    The reasoning is good, either way. Since past flips won’t affect future flips, if the player has made it to turn 5, an odd turn, then their future prospects are no different than they were on turn 1, another odd turn - so A and C are independent. Similarly, A and B are dependent because your chances of winning and losing effectively flip: If you’ve made it to an even turn, then you now win if it takes an odd number of flips from there to get Heads.

    So it’s an almost paradoxical-seeming situation: You win 1/3 of games overall, 2/3 of games that make it to turn 2, 1/3 of games that make it to turn 3, 2/3 of games that make it to turn 4, 1/3 of games that make it to turn 5, etc.
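That alternating pattern can be checked with exact fractions - a sketch assuming the win condition above (the game ends on an even turn), truncating the infinite sums deep enough that the error is negligible:

```python
from fractions import Fraction

N = 200  # truncation depth; the neglected tail is smaller than 1/2^199

def p_end(n):
    # P(game ends exactly on turn n): n-1 Tails then a Heads.
    return Fraction(1, 2**n)

def p_win_given_reach(m):
    # Win = the game ends on an even turn (overall probability 1/3).
    win = sum(p_end(n) for n in range(m, N) if n % 2 == 0)
    reach = sum(p_end(n) for n in range(m, N))
    return win / reach

for m in range(1, 6):
    print(m, float(p_win_given_reach(m)))
# alternates 1/3, 2/3, 1/3, 2/3, 1/3 (up to a vanishing truncation error)
```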


  • Games are always played to completion, though if you wanna make it (barely) more challenging you can add in a 5% chance for both players to get bored and give up on each round (before flipping), leading to a loss. Though it seems unlikely - after flipping a quarter 20 times and getting Tails every time, I’d be inclined to keep flipping if anything.

    response

    These are correct. It is possible to reason out which of B and C is independent of A without going into the numbers.


  • response

    This is close, but you’ve got an off-by-one error where you calculate the probability of making it to the nth round as 1/2^n. That would imply a probability of 1/2 for the game to make it to round 1, 1/4 to make it to round 2, etc - it should be 1/2^(n-1). Correcting this would give you the correct answers for P(B) and P(C).

    As for the dependence question, I’m not sure I followed your arguments there - but saying both are dependent is not correct.





  • solution

    This series can be rewritten as:

    2 * lim (n → ∞) Σ (k = 0 to n) k / 2^k

    The final term of this series will of course be n / 2^n. So let’s try to write each term with 2^n as its denominator:

    2 * lim (n → ∞) Σ (k = 0 to n) k * 2^(n-k) / 2^n → Multiply by 2^(n-k) / 2^(n-k)

    = lim (n → ∞) 1/2^(n-1) Σ (k = 0 to n) k * 2^(n-k) → Factor the denominator out of the sum, combine it with the 2 from in front of the limit

    What will this sum look like, though? Its first few terms are:

    0*2^n + 1*2^(n-1) + 2*2^(n-2) + … + (n-1)*2^1 + n*2^0 (my construction below works backwards, from the last term here to the first)

    This can be written as a double sum: Σ (k = 0 to n-1) Σ (j = 0 to k) 2^j (which is 2^0 + 2^0 + 2^1 + 2^0 + 2^1 + 2^2 + 2^0 + 2^1 + 2^2 + 2^3 + …, so we have n copies of 2^0, n-1 copies of 2^1, etc)

    So we have:

    lim (n → ∞) 1/2^(n-1) Σ (k = 0 to n-1) Σ (j = 0 to k) 2^j

    = lim (n → ∞) 1/2^(n-1) Σ (k = 0 to n-1) (2^(k+1) - 1) → Evaluate innermost sum, just sum of powers of 2 from 0 to k

    = lim (n → ∞) 1/2^(n-1) * (2^(n+1) - 2 - n) → Evaluate next sum, which is powers of 2 from 1 to n, and don’t forget the -1 part

    = lim (n → ∞) (2^(n+1) - n - 2) / 2^(n-1) → Combine leftover terms into one fraction in the limit

    = lim (n → ∞) 4 - n/2^(n-1) - 1/2^(n-2) → Distribute denominator to each term

    = 4 - 0 - 0 → First term is just 4, other terms go to 0

    = 4
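A quick numerical check of the partial sums against that limit (a sketch; `partial` is just the truncated series):

```python
# Partial sums of 2 * sum_{k=0..n} k / 2^k, which should approach 4.
def partial(n):
    return 2 * sum(k / 2**k for k in range(n + 1))

for n in (5, 10, 20, 40):
    print(n, partial(n))
# the neglected tail is roughly (n + 3) / 2^(n-1), so convergence is fast
```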

    I’ve been checking and thinking about this at least a bit each day, and today’s the first day a solution came to me.

    Edit: Checking your solution now, it’s obviously much nicer than this - but at this point I’m just happy I solved it at all. Obviously I knew it was 4 from just calculating partial sums from day 1, but proving it proved challenging.



  • I’ve posted my solution.

    The question of why the bounds swap isn’t too relevant to the problem, since there are no maxes or mins below x = 1 - but the reason they swap is because numbers between 0 and 1 get larger with smaller powers, and smaller with larger powers. Think of how 0.5^2 = 0.25 - so the more a (positive) power is biased toward “rooting” a number in this interval (biased toward 0), the larger the result will be - so the minimum of g now produces a max value, and the max of g yields a minimum value.

    Edit: If it helps, you can also think of numbers between 0 and 1 as being numbers larger than 1, but with a negative power. So something like 0.5^2 is equivalent to (2^-1)^2 = 2^-2. So larger powers become lesser powers that way.

    Really, it’s for the same reason the graphs of 2^x and 0.5^x are reflections of each other.


  • Solution

    For both of these, it’s sufficient to consider only sin(x)^sin(x). It stands to reason that the maximums will happen when sin(x)^sin(x) is at its maximum value, and the minimums will happen when sin(x)^sin(x) is at its minimum value. The x^ part really just makes the function prettier to look at. The function lacks any continuity when sin(x) is negative, so we can consider just the regions where sin(x) is positive.

    For the maximums, this is very easy: The largest sin(x) can be is 1, and so the largest value for sin(x)^sin(x) is also 1. So the monotonically increasing function which passes through each local maximum is y = x^1, or just y = x.

    For the minimums it’s a bit trickier: The smallest sin(x) can be is 0, and while 0^0 is indeterminate, it happens to approach 1 in this case - our minimum is somewhere else (side note: sin(x)^sin(x) approaching 1 for these values is why our max function also exactly bounds the flailing arms of the function). So we turn to derivatives. We can do this using either sin(x)^sin(x) or x^x - I chose to use sin(x)^sin(x) here and x^x (sort of) in the extra credit. Using sin(x)^sin(x) here has the added benefit of also showing the maximums are when sin(x) = 1.

    y = sin(x)^sin(x)

    y = e^(ln(sin(x)) * sin(x))

    y’ = sin(x)^sin(x) * (cos(x)/sin(x) * sin(x) + ln(sin(x)) * cos(x)) → Chain rule, product rule, and chain rule again

    y’ = sin(x)^sin(x) * (cos(x) + ln(sin(x)) * cos(x)) → Simplify

    y’ = sin(x)^sin(x) * cos(x) * (1 + ln(sin(x))) → Factor

    We can set each of these factors to 0.

    sin(x)^sin(x) will never be 0.

    cos(x) will be 0 when x = π/2 + 2πn (2πn instead of πn because we’re skipping over the solutions where sin(x) is negative). These are the maximums, because when x = π/2 + 2πn we have sin(x) = 1.

    1 + ln(sin(x)) = 0

    ln(sin(x)) = -1

    sin(x) = e^-1 = 1/e

    And that’s actually sufficient for our purposes - we could of course say x = arcsin(1/e), but our goal is to calculate the value of sin(x)^sin(x). Since sin(x) = 1/e, we have sin(x)^sin(x) = (1/e)^(1/e). So our monotonically increasing function which hits all the minimums is y = x^((1/e)^(1/e)), or: y = (e-th root of e)-th root of x.

    Graphed solution visible here.
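The claim that s^s bottoms out at s = 1/e with value (1/e)^(1/e) can be sanity-checked with a crude grid search (a sketch, with s standing in for sin(x)):

```python
import math

# Grid search over s in (0, 1]: s^s should be minimized at s = 1/e,
# with minimum value (1/e)^(1/e) (roughly 0.69).
best_s = min((i / 10**5 for i in range(1, 10**5 + 1)), key=lambda s: s**s)
print(best_s, 1 / math.e)                            # both near 0.3679
print(best_s**best_s, (1 / math.e) ** (1 / math.e))  # both near 0.6922
```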

    For the extra credit, we can do the same basic idea: Find the maximum of |sin(x)|^sin(x). Since we know we’re in a region where sin(x) is negative, we can model this as x^-x, and the solution will be valid as long as x is between 0 and 1 - the range |sin(x)| covers here.

    y = x^-x = e^(ln(x) * -x)

    y’ = x^-x * (-1 - ln(x))

    x^-x won’t ever be 0, and -1 - ln(x) will be 0 when ln(x) = -1, or x = 1/e again, same as before. So our maximum value happens at (1/e)^(-1/e), which simplifies to e^(1/e). In other words, our new maximum function is x^(e^(1/e)), or x to the power of the e-th root of e. Graph of all three solution functions
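Same sanity check for the extra credit, assuming the x^-x model above (s standing in for |sin(x)|):

```python
import math

# Grid search over s in (0, 1]: s^(-s) should peak at s = 1/e,
# with maximum value e^(1/e) (roughly 1.44).
best_s = max((i / 10**5 for i in range(1, 10**5 + 1)), key=lambda s: s ** (-s))
print(best_s, 1 / math.e)                           # both near 0.3679
print(best_s ** (-best_s), math.e ** (1 / math.e))  # both near 1.4447
```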

    And, finally - I mentioned that we should also be able to expand the function to its less-defined regions using y = x^-|sin(x)|^sin(x). I don’t care enough to post the full work here, but the reciprocals of the three solutions above are what work here: y = 1/x, y = 1/x^((1/e)^(1/e)), and y = 1/x^(e^(1/e)). Graph. Note that the upper function here only applies in regions where the original function is already defined, and so the other parts of this extension are invalid - you can toggle the main function at the top off to see only the parts that truly apply.

    Nothing particularly interesting happens if you extend things to values of x less than 0 - the graphs (more or less) just reflect across the y-axis.

    Bonus graph with everything + phase shifts and pretty colors


  • No worries! The idea isn’t too bad - it all comes down to the fact that you can’t take the square root of a negative number, but you can take the cube root of a negative - or in general, you can’t take even roots of negatives, but you can take odd roots of negatives. Remember that if your exponent is a/b, it’s like taking the a-th power and the b-th root. So something like (-25)^(1/2) is asking for the square root of -25, which isn’t defined (since we’re not considering complex numbers in this problem). On the other hand, something like (-125)^(1/3) is perfectly fine: It’s just -5, since -5 * -5 * -5 = -125. That’s why negative numbers raised to exponents which can be written as a rational number with an odd denominator are defined, regardless of the sign of the exponent. If a is negative and a^b is defined, then |a^b| will always equal |a|^b - like how |(-125)^(1/3)| = |-125|^(1/3) = 125^(1/3) = 5.

    When you write that rational exponent, its numerator will be even or odd, too. If the numerator is even, then the answer will end up positive: Negative numbers have negative odd roots, and negative numbers to even powers are positive. On the other hand, if the numerator is odd, the answer will end up negative: Negative numbers to odd powers are still negative. So if your expression is x^x and is defined, and x is negative, you know it’s going to equal either |x|^x or -|x|^x. Here’s an example of each case:

    (-1/3)^(-1/3) = -1.4422495 = -(1/3)^(-1/3) (a case for x^x = -|x|^x)

    (-2/3)^(-2/3) = 1.31037 = (2/3)^(-2/3) (a case for x^x = |x|^x)
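That sign rule can be captured in a small helper - a hypothetical sketch using Python’s Fraction (`real_pow` is my name for it, not a standard function):

```python
from fractions import Fraction

def real_pow(a, exp: Fraction):
    """Real-valued a^exp for rational exp, defined for negative a only when
    exp (in lowest terms) has an odd denominator."""
    if a >= 0:
        return float(a) ** float(exp)
    if exp.denominator % 2 == 0:
        raise ValueError("even root of a negative number is not real")
    magnitude = float(abs(a)) ** float(exp)  # |a|^b, always defined here
    # Odd numerator -> negative result; even numerator -> positive result.
    return -magnitude if exp.numerator % 2 else magnitude

print(real_pow(Fraction(-1, 3), Fraction(-1, 3)))  # about -1.4422, i.e. -(1/3)^(-1/3)
print(real_pow(Fraction(-2, 3), Fraction(-2, 3)))  # about  1.3104, i.e.  (2/3)^(-2/3)
```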

    And that’s why the two extension functions I mention wrap the first sin(x) in parentheses, and swap its sign around. The actual y = x^(sin(x)^sin(x)) has infinitely many points that lie on both of those functions (within the intervals where sin(x) is negative - so within the intervals where desmos seemingly graphs nothing) already - desmos just isn’t graphing them, and these new functions are filling in the gaps so everything gets drawn. The gaps happen for irrational numbers and for any rational number with an even denominator - but if you actually put a point at every defined value, you would get a graph that looks like this (only taking principal real roots). If you allowed graphing both real roots when available, you’d get this. If you wanted to go to negative x-values too you’d get this.

    All the points you see on that final graph correspond to (x, y) values that do satisfy just the original equation, y = x^sin(x)^sin(x) - their not displaying in the original graph is a combination of desmos not graphing individual discontinuous points, and desmos only graphing principal roots in equations set up as functions. So it’s a case where those points exist on the graph, but lack any continuity: It’s just infinitely many non-connected points, that are nonetheless dense enough that you could zoom in forever and still always see infinitely many defined points in any given region. The key is that between any two real numbers, there are always infinitely many rational numbers, infinitely many of which can be written with odd denominators - so there are points everywhere, just not everywhere enough.


  • sin(x) is defined everywhere - it’s sin(x)^sin(x) that has undefined spots: A negative number can only be raised to a negative power if that negative power can be expressed as a rational number with an odd denominator. That doesn’t describe most real numbers, so sin(x)^sin(x) is undefined for most values that make sin(x) negative. It does still describe infinitely many values, though - and the extra credit graph kinda just connects them up.