# $\varepsilon$-ROI Lemma #

The $\varepsilon$-Return-on-Investment Lemma isn't strictly *necessary* for analyzing logical inductor properties, but since it crystallizes a commonly-applied strategy into one easily-applicable unit, it's helpful to have as a lemma.

You may remember that the fully general strategy for proving that a logical inductor %note: A sequence of values for math sentences that is inexploitable by any poly-time trader.% has a property is to assume that it does *not* have the property, and then craft a trader %note: A poly-time algorithm that, each day, outputs an algorithm that operates on the market prices for that day and outputs a linear combination of "shares" in math sentences.% that could earn infinite money with bounded risk. The definition of a logical inductor says that this can't happen, so the logical inductor must obey the property you want.

Many desirable properties involve sequence prediction. Thus, to show that a sequence is predicted well, the general strategy is to take an efficiently-emulatable sequence of traders, a "trading firm" %note: A poly-time algorithm that emulates other poly-time algorithms/"employees" and in some way combines their outputted trades into one of its own.%, have each "employee" follow a certain element of the sequence with orders to buy low and sell high, show that infinitely many "employees" will earn guaranteed cash from that, and finally show that the "trading firm" can get infinite money from that.

The $\varepsilon$-ROI Lemma formalizes part of that argument by abstracting away the details of the "employees": it just assumes that each employee can turn an $\varepsilon$ fraction of the shares it moves into guaranteed value, and shows how to exploit that to earn infinite money unless the sequence of "total shares moved" asymptotes to 0.

Then, all a later proof has to do is show that, yes, every "employee" can do that, and no, the sequence of "lifetime shares moved" doesn't go to 0. Combining that with this lemma shows that infinite money can be made, but that's impossible, so the assumption that \[desirable property] doesn't hold is wrong. This speeds up future proofs once the overhead of the $\varepsilon$-ROI Lemma is out of the way.
# $\varepsilon$-ROI Definition #

A trader has $\varepsilon$-ROI if its value in every "legitimate" world is at least $\varepsilon$ times its magnitude %note: Total number of shares traded over its lifetime.%. So, if the trader trades 5 shares over its lifetime and has 0.1-ROI, then its value, as assessed by every "legitimate" world, will be 0.5 or greater.

To put it in math form:

$$\forall\mathbb{W}\in\mathcal{PC}(\Gamma):\lim_{n\to\infty}\mathbb{W}\left(\sum_{i\le n}T_{i}(\overline{\mathbb{P}})\right)\ge\varepsilon||\overline{T}(\overline{\mathbb{P}})||_{mg}$$

If you're reading through the proofs from scratch, you'll definitely want to have [87n] open in another tab, to refer back to as needed, until you can fluently read the LaTeX and automatically know the preconditions behind assigning a piece of terminology to something. Also, as a general rule, for "trading firms", $n$ is used to refer to the day and $k$ is used to refer to the index of the "employee".

# Proof Formation #

Ok, how would a "trading firm" extract guaranteed value from an infinite sequence of $\varepsilon$-ROI traders, without going unboundedly into debt? To leverage intuition, let's say that the trading firm starts off with 1 dollar, can't go into debt, and has to earn infinite money. This is equivalent to starting off with 0 dollars and never going more than 1 dollar into debt.

Well, **assuming** the magnitude of each trader is 1 or less, the "trading firm" can just copy what employee 1 does, and since that magnitude is 1 or less, the "trading firm" at its lowest point would merely be bankrupt. It could keep copying employee 1 until it could be guaranteed that employee 1 had a certain amount of net value according to all remaining worlds. At that point, the "trading firm" is worth a bit more than 1 dollar, and can pick the employee that's *just* getting hired that day and copy *them* until they, too, have a certain amount of guaranteed value. Continuing this process forever, it picks up a tiny bit of value from each trader it copies, earning unbounded money without going into debt.
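Here is a minimal toy rendering of that verbal strategy, *not* the actual construction defined later on this page (which copies all employees at once with budgets). Everything in it, the `ToyEmployee` class, the fixed `maturity_delay`, the made-up `roi` number, is a hypothetical stand-in for illustration only:

```python
# Toy sketch of the verbal strategy: copy a single employee until its net
# value is guaranteed, bank the tiny profit, then switch to the employee
# hired that day. All numbers and classes here are made up.

class ToyEmployee:
    """Stand-in employee: lifetime magnitude <= 1, and after `maturity_delay`
    days its holdings are worth `roi * magnitude` in every remaining world
    (the eps-ROI property, taken on faith here)."""
    def __init__(self, start_day, magnitude=1.0, roi=0.1, maturity_delay=10):
        self.guaranteed_value = roi * magnitude
        self.maturity_day = start_day + maturity_delay

def run_firm(num_days=100):
    banked = 0.0                                  # guaranteed profit so far
    current = ToyEmployee(start_day=1)
    for day in range(1, num_days + 1):
        if day >= current.maturity_day:           # holdings have "matured"
            banked += current.guaranteed_value
            current = ToyEmployee(start_day=day)  # hire today's employee
        # (on a real market the firm would output current's trade here)
    return banked

print(round(run_firm(), 2))   # grows without bound as num_days -> infinity
```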
\n\nThus, given an "unsafe" employee $\\overline{T}^{k}$ that trades $\\alpha_{k}$ shares over all time, it will take $poly(k)$ time to write down the expressible feature $\\xi_{k}$ that'll replicate $\\alpha_{k}$ when exposed to market data. Then, it's possible to define a "safe" employee that consists of the code for the "unsafe" employee except that every "trading strategy" they output is multiplied by $max(1,\\xi_{k})^{-1}$ and that'll automatically scale the magnitude down to 1 if it was above 1, and leave it alone if the magnitude was less than 1.\n\nThis "screening" strategy leaves all the relevant information alone. If you take an original trader that has $\\varepsilon$-ROI, and was efficiently emulatable %note: so the algorithm for the k'th "employee" can be written down in poly(k) time, doesn't trade on days before k, and takes poly(n) time to write down the trading strategy for the n'th day.%, scaling it down doesn't affect that. \n\n%%hidden(Why??): \nMultiplying all trades by the same constant just scales down the final "pile of shares" by that constant, so if it earned $\\varepsilon\\times magnitude$ value before with its pile-o-shares, it'll earn $\\varepsilon\\times magnitude$ value after scaling with its scaled pile-o-shares. If it takes $poly(k)$ time to write down the algorithm for $\\overline{T}^{k}$, it'll take $poly(k)$ time to write down the algorithm for the screened version since it only takes $poly(k)$ time to generate $\\xi_{k}$, and not much more to generate the scaling factor. If a trader doesn't trade on certain days, multiplying its trades by a constant will still result in not trading on those days. If it took $poly(n)$ time or less for $\\overline{T}^{k}$ to write down a trade before, it'll still take (a larger) $poly(n)$ time or less to write down the trade afterwards, the trader is just writing down the cached scaling factor/expressible feature $max(1,\\xi_{k})^{-1}$ at the end of every expressible feature. \n\nThus, scaling the "unsafe" trader down doesn't affect its ability to be efficiently emulated, and it doesn't affect the $\\epsilon$-ROI property.\n%%\n\nThus, at the cost of an extra condition, (having $\\overline{\\mathbb{P}}$-generable sequences of magnitudes), the assumption that $\\alpha_{k}\\le 1$ can be fulfilled.\n\n# Solution to the Knowledge Problem #\n\nRemember how the deductive process, $\\overline{D}$, and the market, $\\overline{\\mathbb{P}}$, are computable? That *technically* means that our "blind" trading firm can emulate them from scratch, a-priori, without looking at any market information, and use them to assess whether an "employee" has acquired guaranteed value yet.\n\nRemember that the algorithm given in the paper to compute $\\overline{\\mathbb{P}}$ runs in, *at best*, double-exponential time, and the "trading firm" would also have to emulate $\\overline{D}$ and keep track of the exponentially many $\\mathcal{PC}$ partial worlds at each stage. Also, the "trading firm" has to run in polynomial time.\n\nSo, needless to say, this "trading firm" is going to take a *hideously* long time to realize that the trader has acquired guaranteed value. If the trader acquires guaranteed value after 1,000 days, the "trading firm" is only going to realize it on day $X$, where $X$ is some number huge enough that $X$ computing steps are enough to emulate 1,000 days worth of deductive process outputs and market history and find every $\\mathcal{PC}$ world so far and see what they all think of the employee's pile of shares. 
# Solution to the Knowledge Problem #

Remember how the deductive process, $\overline{D}$, and the market, $\overline{\mathbb{P}}$, are computable? That *technically* means that our "blind" trading firm can emulate them from scratch, a priori, without looking at any market information, and use them to assess whether an "employee" has acquired guaranteed value yet.

But remember that the algorithm given in the paper to compute $\overline{\mathbb{P}}$ runs in, *at best*, double-exponential time, and the "trading firm" would also have to emulate $\overline{D}$ and keep track of the exponentially many $\mathcal{PC}$ partial worlds at each stage. Also, the "trading firm" has to run in polynomial time.

So, needless to say, this "trading firm" is going to take a *hideously* long time to realize that a trader has acquired guaranteed value. If the trader acquires guaranteed value after 1,000 days, the "trading firm" is only going to realize it on day $X$, where $X$ is some number huge enough that $X$ computing steps are enough to emulate 1,000 days' worth of deductive process outputs and market history, find every $\mathcal{PC}$ world so far, and see what they all think of the employee's pile of shares. There's absolutely no way this check runs in real time, but it does terminate *eventually*, and that turns out to be enough.

# Tying it All Together #

So, to summarize: the "blind" trading firm actually can figure out whether a fixed trader has accrued guaranteed value, *eventually*, by *simulating the entire market from scratch*, spending only $poly(k)$ computing time on day $k$. Also, given an efficiently emulatable sequence of $\varepsilon$-ROI traders, it's possible to scale them down so all their magnitudes are 1 or less. This is enough to begin crystallizing the verbal strategy into math.

Ok, so how should we formalize "guaranteed net value"? Think about it for a while, and don't open the spoiler until you think you see how to crystallize it in math.

%%hidden(Spoiler):
The basic idea of "guaranteed net value" is that both of the following conditions are fulfilled: "almost all trades have been made already", and "the trades made so far have some guaranteed value according to all the $\mathcal{PC}$ worlds, enough to cover the losses from the remaining trades".

"Almost all trades have been made already" is equivalent to $\displaystyle\sum_{i\le m}||T_{i}^{k}(\overline{\mathbb{P}})||_{mg}\ge\left(1-\frac{\varepsilon}{3}\right)\alpha_{k}$.

Pretty much, only an $\frac{\varepsilon}{3}$ fraction of the total lifetime amount of trading is left.

"The trades made so far have some guaranteed value according to all the $\mathcal{PC}$ worlds" is equivalent to $\forall\mathbb{W}\in\mathcal{PC}(D_{n}):\mathbb{W}\left(\displaystyle\sum_{i\le m}T_{i}^{k}(\overline{\mathbb{P}})\right)\ge\frac{2\varepsilon}{3}\alpha_{k}$.

Pretty much, a $\frac{2\varepsilon}{3}$ fraction of the total lifetime amount of trading has been earned in the form of guaranteed value. Note that this is only guaranteed **assuming** that the trader has $\varepsilon$-ROI.

Notice that even if everything goes to hell from this point on, only $\frac{\varepsilon}{3}\alpha_{k}$ value can possibly be lost, since there are so few trades left, and the magnitude of the remaining trades provides a hard bound on how much could be lost. Also, since $\frac{2\varepsilon}{3}\alpha_{k}$ of guaranteed value has already been earned, once the employee has hit this point it can't possibly lose; it'll have $\frac{\varepsilon}{3}\alpha_{k}$ net worth guaranteed in the limit.

By the earlier discussion about how the "trading firm" can just directly emulate the market (at a massive time cost), both of these conditions are indeed computable, if infeasible. There will be *some* $n\gg m$ where it's verifiable in $n$ steps that yes, on day $m$, the k'th employee has "guaranteed net value", or, using the terminology from the paper, "its holdings have matured". (A toy version of this check appears right after this spoiler.)
%%
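As a concreteness aid, here's a toy version of that two-part check, with plain numbers standing in for the market values and the $\mathcal{PC}(D_{n})$ worlds; in the real construction the firm would have to reconstruct all of these by emulating $\overline{D}$ and $\overline{\mathbb{P}}$ from scratch:

```python
# Toy version of the "holdings have matured" check from the spoiler above.
# Real inputs would come from emulating the deductive process and market;
# here they are just numbers supplied by the caller.

def holdings_have_matured(traded_magnitudes, world_values, alpha_k, eps):
    """traded_magnitudes: per-day magnitudes ||T_i^k(P)||_mg for i <= m
    world_values: for each world W in PC(D_n), W's value of the holdings so far
    alpha_k: the employee's total lifetime magnitude
    eps: the ROI fraction"""
    almost_done = sum(traded_magnitudes) >= (1 - eps / 3) * alpha_k
    value_locked_in = all(v >= (2 * eps / 3) * alpha_k for v in world_values)
    return almost_done and value_locked_in

# Made-up numbers: 0.97 of the trading is done, and every remaining world
# values the holdings at >= (2/3) * 0.1 * 1.0 ~= 0.067, so this prints True.
print(holdings_have_matured([0.5, 0.3, 0.17], [0.07, 0.09], alpha_k=1.0, eps=0.1))
```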
And now that "guaranteed net value" has been formalized, let's try to define the trading firm in such a way that it precisely captures the "start with a dollar, copy traders but only risk at most 1 dollar until they have guaranteed net value, copy more traders" strategy. Again, if you want to get better at math, think of how to do it before opening the spoiler.

%%hidden(Spoiler):
Ok, we'll want a "budget on day $k$" variable for how much money would be left over if all the employees with uncertain net value lost everything. And this will be used as a scaling factor for the $k$'th employee. If the budget on day 1,000 is 50 cents, then the trading firm will apply a scaling factor of 0.5 to all future trades of the 1,000th employee. After all, even if their magnitude is 1 and all their trades are disastrous, only 50 cents at most could be lost.

So, $T_{n}:=\displaystyle\sum_{k\le n}\beta_{k}^{\dagger}T_{n}^{k}$

The day-$n$ trade of the trading firm as a whole is the sum of the trades of all the employees so far on day $n$, with each employee individually scaled down by their personal "budget" feature.

And, to define the "budget" feature... it starts at 1 and subtracts the total money risked by all employees with uncertain net value.

$open(i,k):=0$ if it's verifiable in $k$ computation steps that $\overline{T}^{i}$ has guaranteed net value; otherwise it's 1. We'll use this to select only the employees with uncertain net value.

$\beta_{k}^{\dagger}:=1-\displaystyle\sum_{i<k}open(i,k)\beta_{i}^{\dagger}\alpha_{i}^{\dagger}$

Remember, $\dagger$ means it's an expressible feature. $\beta_{i}^{\dagger}$ is the expressible feature that returns the budget for day $i$, and $\alpha_{i}^{\dagger}$ is the expressible feature that returns the magnitude of the $i$'th employee. So, since $open(i,k)=1$ for every employee that still has uncertain net value on day $k$ (and 0 for every employee whose value is already guaranteed), yes, this takes 1 dollar and subtracts the total money being risked by copying all previous still-uncertain employees. (The $i$'th employee would risk $\alpha_{i}$ dollars, and their trades get scaled down by a factor of $\beta_{i}$ too.) A toy numeric rendering of this construction appears right after this spoiler.
%%
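Here's that construction as a toy computation, with the expressible features $\alpha_{i}^{\dagger}$ and $\beta_{i}^{\dagger}$ replaced by plain numbers, and `open_fn` and `employee_trades` supplied by the caller. It's only meant to show how the budget recursion and the scaled sum fit together, not to be the real (expressible-feature-valued) trader:

```python
# Toy numeric version of the budget recursion and the firm's day-n trade.
# alphas, open_fn, and employee_trades are hypothetical stand-ins for the
# expressible features and traders of the real construction.

def budgets(alphas, open_fn, n):
    """Compute beta_1..beta_n via beta_k = 1 - sum_{i<k} open(i,k)*beta_i*alpha_i.
    Indices are 1-based to match the text; index 0 is unused."""
    beta = [0.0] * (n + 1)
    for k in range(1, n + 1):
        risked = sum(open_fn(i, k) * beta[i] * alphas[i] for i in range(1, k))
        beta[k] = 1.0 - risked
    return beta

def firm_trade(n, alphas, open_fn, employee_trades):
    """T_n = sum_{k<=n} beta_k * T_n^k, where employee_trades(k, n) is a dict of shares."""
    beta = budgets(alphas, open_fn, n)
    combined = {}
    for k in range(1, n + 1):
        for sentence, shares in employee_trades(k, n).items():
            combined[sentence] = combined.get(sentence, 0.0) + beta[k] * shares
    return combined

# Example: two employees, neither verified yet, each buying 1 share of "phi".
alphas = [0.0, 1.0, 0.5]               # alphas[k] = magnitude of employee k
always_open = lambda i, k: 1           # nobody's value is guaranteed yet
trades = lambda k, n: {"phi": 1.0}
print(firm_trade(2, alphas, always_open, trades))   # beta_1=1, beta_2=0 -> {'phi': 1.0}
```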
Alright, looking good so far. But is the thing that was just defined an actual trader? Does it take $poly(n)$ time to compute the trade?

%%hidden(Answer):
What's our "trading firm" going to have to compute on day $n$? Well, it's going to have to compute all the $\overline{T}^{k}$, compute all the $\beta_{k}^{\dagger}$, *and* run all the $\overline{T}^{k}$ to return an actual trade when they're given $n$.

Well, we already **assumed** that $(\overline{T}^{k})_{k}$ was efficiently emulatable, so it takes $poly(k)$ time to generate $\overline{T}^{k}$, and they all have the same $poly(n)$ runtime bound.

That takes care of part of it. What is our "trading firm" going to have to do to compute all the $\beta_{k}^{\dagger}$? Well, it'll have to compute all the $\alpha_{i}^{\dagger}$ and all the $open(i,k)$.

Ok. Computing a particular $\alpha_{i}^{\dagger}$ takes $poly(i)$ time (because $\overline{\alpha}$ is a $\overline{\mathbb{P}}$-generable sequence). We're assembling a list of $k$ of them, so it'll take $\mathcal{O}(k\cdot poly(k))$ time to generate that list. Computing a particular $open(i,k)$ value takes $k$ steps, and we're doing that for $k-1$ different values of $i$, so it'll take $\mathcal{O}(k^{2})$ time to generate that list of values. And previously computed values of $\beta_{i}^{\dagger}$ can be cached and reused.

Thus, since it takes polynomial time to generate all the $\alpha$'s and all the $open$'s, and to generate and cache the $\beta$'s as you go and add all the stuff up, $\beta_{k}^{\dagger}$ is computable in $poly(k)$ time.

Since it takes $poly(k)$ time to generate $\beta_{k}^{\dagger}$ and $poly(k)$ time to generate $\overline{T}^{k}$, and we're doing this $n$ times over and adding them up with $k\le n$, it takes $poly(n)$ time to write down the entire collection of scaled traders. And since there are $n$ of them, they all run in $poly(n)$ time, and we're also summing them up, it takes $poly(n)$ time for the whole computation to complete. Whew!
%%

# Backing Up a Bit #

The assumptions made so far:

 1. There is an efficiently emulatable sequence of traders $(\overline{T}^{k})_{k}$. This assumption was necessary for our "trading firm" to run in polynomial time.
 2. Each employee $\overline{T}^{k}$ has $\varepsilon$-ROI. This assumption was necessary for there to exist a time at which each employee has guaranteed value.
 3. There's a $\overline{\mathbb{P}}$-generable sequence $\overline{\alpha}$ that replicates the magnitudes of the employees. This assumption was necessary to scale all the employees down to magnitude 1 or less, and to be able to define the budgeting terms and the guaranteed-value cutoff.

The goal:

 1. Show that $\displaystyle\lim_{k\to\infty}\alpha_{k}=0$.

Time to use our arsenal of partial results to actually prove the lemma! We just need to show that the "trading firm" has bounded loss, and unbounded earnings if $\displaystyle\lim_{k\to\infty}\alpha_{k}\not=0$.
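Restating the assumptions and the goal in one place (this phrasing is a paraphrase, not a quote from the paper): if $(\overline{T}^{k})_{k}$ is efficiently emulatable, every $\overline{T}^{k}$ has $\varepsilon$-ROI, and $\overline{\alpha}$ is a $\overline{\mathbb{P}}$-generable sequence with $\alpha_{k}=||\overline{T}^{k}(\overline{\mathbb{P}})||_{mg}$, then

$$\lim_{k\to\infty}\alpha_{k}=0.$$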
%%hidden(Final Proof):

Instead of constantly writing out $\alpha_{k}^{\dagger}(\overline{\mathbb{P}})$ and $\beta_{k}^{\dagger}(\overline{\mathbb{P}})$, they will be abbreviated as $\alpha_{k}$ and $\beta_{k}$. Please remember, these are *numbers*, while $\alpha_{k}^{\dagger}$ and $\beta_{k}^{\dagger}$ are *functions*.

By how we defined $\beta_{k}$,
$$\beta_{k}:=1-\sum_{i<k}open(i,k)\beta_{i}\alpha_{i}$$
Since every $\alpha_{k}\le 1$ (they were all scaled to be that way),
$$\beta_{k}\alpha_{k}\le 1-\sum_{i<k}open(i,k)\beta_{i}\alpha_{i}$$
By rearranging and relabeling variables,
$$\beta_{n}\alpha_{n}+\sum_{k<n}open(k,n)\beta_{k}\alpha_{k}\le 1$$
Because $\beta_{n}\alpha_{n}\ge open(n,n)\beta_{n}\alpha_{n}$,
$$\sum_{k\le n}open(k,n)\beta_{k}\alpha_{k}\le 1$$
By the definition of $\beta_{k}$, the above line, and induction, $\beta_{k}\ge 0$.

The value of $\overline{T}$'s holdings at time $n$ is
$$\mathbb{W}\left(\sum_{i\le n}T_{i}(\overline{\mathbb{P}})\right)=\sum_{i\le n}\mathbb{W}\left(T_{i}(\overline{\mathbb{P}})\right)$$
And by the definition of the trader $T_{n}$,
$$=\sum_{k\le n}\mathbb{W}\left(\sum_{i\le n}\beta_{k}T_{i}^{k}(\overline{\mathbb{P}})\right)$$
and this can be split into two pieces: the value of the traders with uncertain value, and the value of the traders with guaranteed value.
$$=\sum_{uncertain, k\le n}\mathbb{W}\left(\sum_{i\le n}\beta_{k}T_{i}^{k}(\overline{\mathbb{P}})\right)+\sum_{guaranteed, k\le n}\mathbb{W}\left(\sum_{i\le n}\beta_{k}T_{i}^{k}(\overline{\mathbb{P}})\right)$$
Looking at the first term, with the traders of uncertain value: the negative of the magnitude serves as a lower bound on the value of a trader according to all worlds. Also, $\beta_{k}$ is the same for every $i$ in the inside sum, so it can be factored out.
$$\sum_{uncertain, k\le n}\mathbb{W}\left(\sum_{i\le n}\beta_{k}T_{i}^{k}(\overline{\mathbb{P}})\right)\ge -\sum_{uncertain, k\le n}\beta_{k}\sum_{i\le n}||T_{i}^{k}(\overline{\mathbb{P}})||_{mg}$$
Adding in the rest of the sequence only increases the magnitude so far, and thus the potential losses. Also, the sum of the magnitudes of all trades made by the k'th employee is just $\alpha_{k}$, and $open(k,n)$ equals 1 on every employee of uncertain value and 0 on every employee outside that set. So
$$\ge -\sum_{uncertain, k\le n}\beta_{k}\sum_{i\in\mathbb{N}^{+}}||T_{i}^{k}(\overline{\mathbb{P}})||_{mg} = -\sum_{k\le n}open(k,n)\beta_{k}\alpha_{k}$$
This is $\ge -1$ by multiplying the fourth equation in this section by $-1$. Thus, the value of the "uncertain" traders' holdings up to day $n$ is $-1$ at worst.

Now to analyze the second term, assessing the wealth of the employees of guaranteed value. By the earlier discussion on the precise conditions for declaring an employee to have "guaranteed value",
$$\sum_{guaranteed, k\le n}\mathbb{W}\left(\sum_{i\le n}\beta_{k}T_{i}^{k}(\overline{\mathbb{P}})\right)\ge\sum_{guaranteed, k\le n}\frac{\varepsilon}{3}\beta_{k}\alpha_{k}$$
Combining these two lower bounds on the employees of guaranteed and uncertain value, the value of the trading firm on day $n$ satisfies
$$\sum_{k\le n}\mathbb{W}\left(\sum_{i\le n}\beta_{k}T_{i}^{k}(\overline{\mathbb{P}})\right)\ge -1+\sum_{guaranteed, k\le n}\frac{\varepsilon}{3}\beta_{k}\alpha_{k}$$
Since we have previously established that $\beta_{k}\ge 0$ and $\alpha_{k}\ge 0$, the value of the trader as a whole is bounded below by $-1$, so it has bounded losses. Now, unbounded value must be demonstrated. This can be done by demonstrating that the second term diverges as $n\to\infty$.

For any employee $k$, there is an (extremely large) day $n$ at which it will be verified to have guaranteed value. Therefore,
$$\lim_{n\to\infty}\sum_{guaranteed, k\le n}\frac{\varepsilon}{3}\beta_{k}\alpha_{k}=\sum_{k}\frac{\varepsilon}{3}\beta_{k}\alpha_{k}$$
Since $\frac{\varepsilon}{3}$ is a constant, showing that $\displaystyle\sum_{k}\beta_{k}\alpha_{k}=\infty$ is sufficient to show divergence. **Assume** that $\displaystyle\lim_{k\to\infty}\alpha_{k}\not=0$. Then there is some $\delta$ such that $\alpha_{k}>\delta$ infinitely often. Also **assume**, for a proof by contradiction, that
$$\displaystyle\sum_{k}\beta_{k}\alpha_{k}$$
is finite. Then there is some day $n$ such that
$$\sum_{i>n}\beta_{i}\alpha_{i}\le\frac{1}{2}$$
Let $N$ be a day by which all employees with $k\le n$ have been verified to have guaranteed value. Then
$$\sum_{i<N}open(i,N)\beta_{i}\alpha_{i}=\sum_{i\le n}0\cdot\beta_{i}\alpha_{i}+\sum_{n<i<N}open(i,N)\beta_{i}\alpha_{i}$$
And by the fact that $n<N$ and $\displaystyle\sum_{i>n}\beta_{i}\alpha_{i}\le\frac{1}{2}$,
$$\le 0+\sum_{n<i<N}\beta_{i}\alpha_{i}\le\frac{1}{2}$$
Now for a little bit of symbol manipulation.
$$\alpha_{k}\beta_{k}=\alpha_{k}\left(1-\sum_{i<k}open(i,k)\beta_{i}\alpha_{i}\right)$$
As demonstrated above, $\displaystyle\sum_{i<N}open(i,N)\beta_{i}\alpha_{i}\le\frac{1}{2}$, and the same argument applies with any $k>N$ in place of $N$, so $\displaystyle\sum_{i<k}open(i,k)\beta_{i}\alpha_{i}\le\frac{1}{2}$ for all $k>N$.
Since there are infinitely many days where $k>N$ and $\alpha_{k}>\delta$, on all of those infinitely many occasions we'll have
$$\alpha_{k}\left(1-\sum_{i<k}open(i,k)\beta_{i}\alpha_{i}\right)\ge\alpha_{k}\left(1-\frac{1}{2}\right)\ge\frac{\delta}{2}$$
Since $\beta_{k}\alpha_{k}\ge\frac{\delta}{2}$ infinitely often, this shows that
$$\sum_{k}\alpha_{k}\beta_{k}=\infty$$
disproving the assumption that it is finite.

Since this sum diverges, by the discussion above, $\displaystyle\lim_{n\to\infty}\sum_{guaranteed, k\le n}\frac{\varepsilon}{3}\beta_{k}\alpha_{k}$ diverges, so the "trading firm" has unbounded earnings.

To summarize: this trading firm has bounded losses, and, **assuming** that $\displaystyle\lim_{k\to\infty}\alpha_{k}\not=0$, it has unbounded earnings. This is impossible for a logical inductor. Therefore, one of the following conditions *must* fail:

 - $\varepsilon$-ROI on all "employees"
 - The sequence of "employees" is efficiently emulatable, so they can be combined into a "trading firm"
 - There is a $\overline{\mathbb{P}}$-generable sequence $\overline{\alpha}$ that describes the magnitudes of the sequence of "employees"
 - $\displaystyle\lim_{k\to\infty}\alpha_{k}\not=0$

Proving all four forces a contradiction, and future proofs will proceed by proving these four properties.
%%
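As a final sanity check on the key divergence step, here's a toy simulation of the budget recursion under a made-up maturity schedule: each employee's value is assumed to become verifiable a fixed number of days after it's hired, and every $\alpha_{k}$ is held at a constant bounded away from 0. The running sum $\sum_{k}\beta_{k}\alpha_{k}$ grows without bound, which is the behavior the proof exploits:

```python
# Toy numeric illustration of the divergence step: if alpha_k stays bounded
# away from 0, the running sum of beta_k * alpha_k grows without bound.
# The "verified after a fixed delay" rule is a made-up stand-in for the
# (much slower) real verification process.

def simulate(num_employees, alpha=0.3, delay=5):
    def is_open(i, k):
        # Employee i still counts as uncertain at "time" k until k steps
        # suffice to verify it; here, that happens once k >= i + delay.
        return 1 if k < i + delay else 0

    beta = [0.0] * (num_employees + 1)
    running_sum = 0.0
    for k in range(1, num_employees + 1):
        risked = sum(is_open(i, k) * beta[i] * alpha for i in range(1, k))
        beta[k] = 1.0 - risked
        running_sum += beta[k] * alpha
    return running_sum

# The partial sums keep growing roughly linearly in the number of employees.
print(simulate(500), simulate(1000), simulate(2000))
```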