NASSIM NICHOLAS TALEB, essayist and former mathematical trader, is Distinguished Professor of Risk Engineering at NYU’s Polytechnic Institute. He is the author of the international bestseller The Black Swan and the recently published Antifragile: Things That Gain from Disorder. (US: Random House; UK: Penguin Press)
UNDERSTANDING IS A POOR SUBSTITUTE FOR CONVEXITY (ANTIFRAGILITY)
Something central, very central, is missing in historical accounts of scientific and technological discovery. The discourse and controversies focus on the role of luck as opposed to teleological programs (from telos, "aim"), that is, ones that rely on pre-set direction from formal science. This is a faux-debate: luck cannot lead to formal research policies; one cannot systematize, formalize, and program randomness. The driver is neither luck nor direction, but must lie in the asymmetry (or convexity) of payoffs, a simple mathematical property that has lain hidden from the discourse, and the understanding of which can lead to precise research principles and protocols.
MISSING THE ASYMMETRY
The luck versus knowledge story is as follows. Ironically, we have vastly more evidence for results linked to luck than for those coming from the teleological, outside physics—even after discounting for the sensationalism. In some opaque and nonlinear fields, like medicine or engineering, the teleological exceptions are in the minority, such as a small number of designer drugs. This leaves us living with a contradiction: we largely got to where we are thanks to undirected chance, yet we build research programs going forward based on direction and narratives. And, what is worse, we are fully conscious of the inconsistency.
The point we will be making here is that logically, neither trial and error nor "chance" and serendipity can be behind the gains in technology and empirical science attributed to them. By definition chance cannot lead to long term gains (it would no longer be chance); trial and error cannot be unconditionally effective: errors cause planes to crash, buildings to collapse, and knowledge to regress.
The beneficial properties have to reside in the type of exposure, that is, the payoff function and not in the "luck" part: there needs to be a significant asymmetry between the gains (as they need to be large) and the errors (small or harmless), and it is from such asymmetry that luck and trial and error can produce results. The general mathematical property of this asymmetry is convexity (which is explained in Figure 1); functions with larger gains than losses are nonlinear-convex and resemble financial options. Critically, convex payoffs benefit from uncertainty and disorder. The nonlinear properties of the payoff function, that is, convexity, allow us to formulate rational and rigorous research policies, and ones that allow the harvesting of randomness.
Figure 1 - More Gain than Pain from a Random Event. The performance curve bends outward, hence looks "convex". Wherever such asymmetry prevails, we can call the position convex; otherwise we are in a concave position. The implication is that you are harmed much less by an error (or a variation) than you can benefit from it; in the long run, you would welcome uncertainty.
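The asymmetry can be made concrete with a small simulation (an illustrative sketch, not from the essay: the option-like payoff max(x, 0) and the coin-flip shock are assumptions chosen for simplicity). Without variation the payoff is zero; with a symmetric random event, the gain on the upside is not offset by harm on the downside, so the average outcome improves.

```python
import random

random.seed(1)

def convex_payoff(x):
    """Option-like exposure: full participation in gains, none in losses."""
    return max(x, 0.0)

def average_payoff(shock_size, n=100_000):
    """Average payoff when outcomes are +shock_size or -shock_size with equal odds."""
    total = 0.0
    for _ in range(n):
        x = shock_size if random.random() < 0.5 else -shock_size
        total += convex_payoff(x)
    return total / n

print(average_payoff(0.0))  # no variation: payoff 0
print(average_payoff(1.0))  # with variation: about 0.5, the random event helps on average
```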
OPAQUE SYSTEMS AND OPTIONALITY
Further, it is in complex systems, ones in which we have little visibility into the chains of cause and consequence, that tinkering, bricolage, or similar variations of trial and error have been shown to vastly outperform the teleological—it is nature's modus operandi. But the tinkering needs to be convex; that is imperative.
Take the most opaque of all, cooking, which relies entirely on the heuristics of trial and error, as it has not been possible for us to design a dish directly from chemical equations or reverse-engineer a taste from nutritional labels. We take hummus, add an ingredient, say a spice, taste to see if there is an improvement from the complex interaction, and keep the addition if we like it or discard it if we don't. Critically, we have the option, not the obligation, to keep the result, which allows us to retain the upper bound and be unaffected by adverse outcomes.
This "optionality" is what is behind the convexity of research outcomes. An option allows its user to get more upside than downside as he can select among the results what fits him and forget about the rest (he has the option, not the obligation). Hence our understanding of optionality can be extended to research programs — this discussion is motivated by the fact that the author spent most of his adult life as an option trader. If we translate François Jacob's idea into these terms, evolution is a convex function of stressors and errors —genetic mutations come at no cost and are retained only if they are an improvement. So are the ancestral heuristics and rules of thumbs embedded in society; formed like recipes by continuously taking the upper-bound of "what works". But unlike nature where choices are made in an automatic way via survival, human optionality requires the exercise of rational choice to ratchet up to something better than what precedes it —and, alas, humans have mental biases and cultural hindrances that nature doesn't have.
Optionality frees us from the straitjacket of direction, predictions, plans, and narratives. (To use a metaphor from information theory, if you are going to a vacation resort offering you more options, you can predict your activities by asking a smaller number of questions ahead of time.)
While getting a better recipe for hummus will not change the world, some results offer abnormally large benefits from discovery; consider penicillin or chemotherapy or potential clean technologies and similar high-impact events ("Black Swans"). The discovery of the first antimicrobial drugs came on the heels of hundreds of systematic (convex) trials in the 1920s by such people as Domagk, whose research program consisted of trying out dyes without much understanding of the biological process behind the results. And unlike an explicit financial option, for which the buyer pays a fee to a seller and which hence tends to be priced in a way that prevents undue profits, benefits from research are not zero-sum.
THINGS LOVE UNCERTAINTY
What allows us to map a research funding and investment methodology is a collection of mathematical properties that we have known heuristically since at least the 1700s and explicitly since around 1900 (with the results of Johan Jensen and Louis Bachelier). These properties identify the inevitability of gains from convexity and the counterintuitive benefit of uncertainty ii iii. Let us call the "convexity bias" the difference between the results of trial and error in which gains and harm are equal (linear), and one in which gains and harm are asymmetric (to repeat, a convex payoff function). The central and useful properties are that a) the more convex the payoff function, expressed as the difference between potential benefits and harm, the larger the bias; and b) the more volatile the environment, the larger the bias. This last property is often missed, as humans have a propensity to hate uncertainty.
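These two properties can be checked numerically (a sketch under assumed Gaussian noise, with payoff functions chosen here for illustration, following the definition in note iv): the convexity bias is the gap between the expected payoff of the trial-and-error exposure and the payoff of the average outcome, and it grows both with the curvature of the payoff and with volatility.

```python
import random
import statistics

random.seed(3)

def convexity_bias(payoff, sigma, n=200_000):
    """E[F(x)] - F(E[x]) for zero-mean Gaussian x with standard deviation sigma."""
    mean_of_payoff = statistics.fmean(payoff(random.gauss(0, sigma)) for _ in range(n))
    return mean_of_payoff - payoff(0.0)

linear      = lambda x: x                  # symmetric gains and harm: bias near 0
option_like = lambda x: max(x, 0.0)        # convex: bias grows with volatility
more_convex = lambda x: max(x, 0.0) ** 2   # more curvature: bias grows even faster

for sigma in (0.5, 1.0, 2.0):
    print(sigma, convexity_bias(linear, sigma),
          convexity_bias(option_like, sigma), convexity_bias(more_convex, sigma))
```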
Antifragile is the name this author gave (for lack of a better one) to the broad class of phenomena endowed with such a convexity bias, as they gain from the "disorder cluster", namely volatility, uncertainty, disturbances, randomness, and stressors. The antifragile is the exact opposite of the fragile, which can be defined as what hates disorder. A coffee cup is fragile because it wants tranquility and a low-volatility environment; the antifragile wants the opposite: high volatility increases its welfare. This latter attribute, gaining from uncertainty, favors optionality over the teleological in an opaque system, as it can be shown that the teleological is hurt under increased uncertainty.
The point can be made clear with the following. When you inject uncertainty and errors into an airplane ride (the fragile or concave case) the result is worsened, as errors invariably lead to plane delays and increased costs—not counting a potential plane crash. The same goes for bank portfolios and other fragile constructs. But if you inject uncertainty into a convex exposure such as some types of research, the result improves, since uncertainty increases the upside but not the downside. This differential maps the way. The convexity bias, unlike serendipity et al., can be defined, formalized, identified, even on occasion measured scientifically, and it can lead to a formal policy of decision making under uncertainty and a classification of strategies based on their ex ante predicted efficiency and projected success, as we will do next with the following seven rules.
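The contrast can be simulated directly (a sketch; the concave "delay-like" payoff and the noise levels are illustrative assumptions): as the injected volatility rises, the average outcome of the concave exposure deteriorates while that of the convex exposure improves.

```python
import random
import statistics

random.seed(4)

def mean_outcome(payoff, sigma, n=200_000):
    """Average result of an exposure when zero-mean noise of size sigma is injected."""
    return statistics.fmean(payoff(random.gauss(0, sigma)) for _ in range(n))

fragile     = lambda x: -abs(x)      # concave: every deviation is a delay or a cost
antifragile = lambda x: max(x, 0.0)  # convex: deviations can only add upside

for sigma in (0.0, 0.5, 1.0, 2.0):
    print(sigma, mean_outcome(fragile, sigma), mean_outcome(antifragile, sigma))
# The fragile outcome worsens as sigma grows; the antifragile one improves.
```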
SEVEN RULES OF ANTIFRAGILITY (CONVEXITY) IN RESEARCH
Next I outline the rules. In parentheses are fancier words that link the idea to option theory.
1) Convexity is easier to attain than knowledge (in the technical jargon, the "long-gamma" property): As we saw in Figure 2, under some level of uncertainty, we benefit more from improving the payoff function than from knowledge about what exactly we are looking for. Convexity can be increased by lowering costs per unit of trial (to improve the downside).
2) A "1/N" strategy is almost always best with convex strategies (the dispersion property):following point (1) and reducing the costs per attempt, compensate by multiplying the number of trials and allocating 1/N of the potential investment across N investments, and make N as large as possible. This allows us to minimize the probability of missing rather than maximize profits should one have a win, as the latter teleological strategy lowers the probability of a win. A large exposure to a single trial has lower expected return than a portfolio of small trials.
Further, research payoffs have "fat tails", with results in the "tails" of the distribution dominating the properties; the bulk of the gains come from the rare event, the "Black Swan": 1 in 1,000 trials can lead to 50% of the total contributions—similar to the size of companies (50% of capitalization often comes from 1 in 1,000 companies), bestsellers (think Harry Potter), or wealth. And critically, we don't know the winner ahead of time (a small simulation below illustrates the point).
Figure 3 - Fat Tails: Small Probability, High-Impact Payoffs. The horizontal line can be the payoff over time, or cross-sectional over many simultaneous trials.
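A sketch of why dispersion beats concentration when the payoff is dominated by a rare winner (the 1-in-1,000 odds per trial are taken from the text; everything else is an assumption for illustration): with the budget split 1/N across many cheap trials, the probability of missing the Black Swan collapses.

```python
import random

random.seed(5)

P_BLACK_SWAN = 1 / 1000  # from the text: roughly 1 trial in 1,000 is the huge winner

def hit_probability(n_trials, runs=50_000):
    """Chance that at least one of N cheap trials catches the rare big payoff."""
    hits = sum(
        any(random.random() < P_BLACK_SWAN for _ in range(n_trials))
        for _ in range(runs)
    )
    return hits / runs

print(hit_probability(1))    # one concentrated trial: about 0.001
print(hit_probability(100))  # 1/N across 100 cheap trials: about 0.095
```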
3) Serial optionality (the cliquet property). A rigid business plan gets one locked into a preset invariant policy, like a highway without exits—hence devoid of optionality. One needs the ability to change opportunistically and "reset" the option for a new option, ratcheting up and locking in a higher state. To translate into practical terms, plans need to 1) stay flexible with frequent ways out, and, counter to intuition, 2) be very short term, in order to properly capture the long term. Mathematically, five sequential one-year options are vastly more valuable than a single five-year option (a toy simulation below makes the comparison).
This explains why matters such as strategic planning have never borne fruit in empirical reality: planning has the side effect of restricting optionality. It also explains why top-down centralized decisions tend to fail.
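Here is that comparison as a toy Monte Carlo (an illustrative sketch, not the author's calculation: yearly outcomes are assumed to be independent unit-variance Gaussian shocks). The serial structure lets each year's gain be locked in before resetting, while the single long-dated option only pays on the end-to-end result.

```python
import random
import statistics

random.seed(6)

def yearly_moves():
    """Five independent yearly outcomes (assumed unit-variance shocks)."""
    return [random.gauss(0, 1) for _ in range(5)]

def serial_options(moves):
    """Cliquet: reset each year, locking in (ratcheting up) every yearly gain."""
    return sum(max(m, 0.0) for m in moves)

def single_option(moves):
    """One rigid five-year commitment: only the cumulative end result counts."""
    return max(sum(moves), 0.0)

paths = [yearly_moves() for _ in range(100_000)]
print(statistics.fmean(serial_options(p) for p in paths))  # about 2.0
print(statistics.fmean(single_option(p) for p in paths))   # about 0.9
```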
4) Nonnarrative Research (the optionality property). Technologists in California "harvesting Black Swans" tend to invest in agents rather than in plans and narratives that look good on paper, preferring agents who know how to use the option by opportunistically switching and ratcheting up. Typically, people try six or seven technological ventures before getting to destination. Note the failure of "strategic planning" to compete with convexity.
5) Theory is born from (convex) practice more often than the reverse (the nonteleological property). Textbooks tend to show technology flowing from science, when it is more often the opposite case, dubbed the "lecturing birds on how to fly" effect v vi. In such developments as the industrial revolution (and more generally outside linear domains such as physics), there is very little historical evidence for the contribution of fundamental research compared to that of tinkering by hobbyists. vii Figure 2 shows, more technically, how in a random process characterized by "skills" and "luck", and some opacity, antifragility—the convexity bias—can be shown to severely outperform "skills". And convexity is missed in histories of technologies, replaced with ex post narratives.
6) Premium for simplicity (the less-is-more property). It took at least five millennia between the invention of the wheel and the innovation of putting wheels under suitcases. It is sometimes the simplest technologies that are ignored. In practice there is no premium for complexification; in academia there is. Looking for rationalizations, narratives and theories invites complexity. In an opaque operation, it is impossible to figure out ex ante what knowledge is required to navigate.
7) Better cataloguing of negative results (the via negativa property). Optionality works by negative information, reducing the space of what we do by knowledge of what does not work. For that we need to pay for negative results.
Some of the critics of these ideas—over the past two decades—have countered that this proposal resembles buying "lottery tickets". Lottery tickets are patently overpriced, reflecting the "long shot bias" by which agents, according to economists, overpay for long odds. This comparison, it turns out, is fallacious, as the effect of the long shot bias is limited to artificial setups: lotteries are sterilized randomness, constructed and sold by humans, and have a known upper bound. This author calls such a problem the "ludic fallacy". Research has explosive payoffs, with an unknown upper bound—a "free option", literally. And we have evidence (from the performance of banks) that in the real world, betting against long shots does not pay, which makes research a form of reverse banking viii.
i Jacob, F., 1977, "Evolution and tinkering," Science, 196(4295): 1161–1166.
ii Bachelier, L., 1900, Théorie de la spéculation, Gauthier-Villars.
iii Jensen, J.L.W.V., 1906, "Sur les fonctions convexes et les inégalités entre les valeurs moyennes," Acta Mathematica 30.
iv Take F(x) = max(x, 0), where x is the outcome of trial and error and F is the payoff. By Jensen's inequality, ∫ F(x) p(x) dx ≥ F(∫ x p(x)). The difference between the two sides is the convexity bias, which increases with uncertainty.
v Taleb, N., and Douady, R., 2013, "Mathematical Definition and Mapping of (Anti)Fragility," Quantitative Finance.
vi Mokyr, Joel, 2002, The Gifts of Athena: Historical Origins of the Knowledge Economy, Princeton, N.J.: Princeton University Press.
vii Kealey, T., 1996, The Economic Laws of Scientific Research, London: Macmillan.
viii Briys, E., Nock, R., & Magdalou, B., 2012, "Convexity and Conflation Biases as Bregman Divergences: A Note on Taleb's Antifragile."
December 17, 2012
This Is Not a Profile of Nassim Taleb
I had lunch with Nassim Nicholas Taleb. It didn't go well.
We met at a French cafe in Manhattan, on the Upper West Side, not far from Columbia University. It was a meeting more than a year in the making. I first e-mailed him when his book of aphorisms, The Bed of Procrustes, was published to see if he might submit to an interview. This, I realized, was a long shot. Taleb, best known as the author of The Black Swan, a book about how we underestimate the improbable, isn't much for interviews and regards most journalists as fools and phonies, right alongside professional academics and bureaucrats. I didn't expect to hear back.
Lo and behold, he agreed to an interview. Before we could hash out the details, though, Carlin Romano wrote a review of The Bed of Procrustes for The Chronicle. The headline was "The Bed of Crusty," so right away it didn't sound favorable. It wasn't. Romano dismissed Taleb as a "would-be aphorist with a major tin ear." I explained to Taleb that, while Romano and I write for the same publication, we had never met and I didn't know about the review in advance. He was not mollified and backed out, with apologies. Who could blame him?
Then, last summer, I learned that he had a new book coming out. Not a slim volume of maxims and observations but rather a meaty treatise. I e-mailed him again, and we spoke on the phone. He seemed excited about the possibility of an article, giddy even, perhaps because he thought it would stick it to the academics he regards with contempt. In previous books, he told me, he had held back, pulled a punch or two. Not this time. If they wanted to come at him with lawyers and pitchforks, so be it. Taleb sent me a PDF of the manuscript, titled Antifragile: Things That Gain From Disorder, which he hadn't quite completed. It had yet to be edited, and he was still working on the conclusion.
I read it. Afterward, I sent him an e-mail, calling the book "engaging and stimulating throughout." Say what you want about Taleb's writing—and Romano is not the only critic—he doesn't produce antiseptic prose, and there's something fun about his surly, middle-finger-to-the-experts attitude. And the digressions! One moment he's telling you why convexity leads to philostochasticity and the next he's explaining why he doesn't eat papayas. For the record, he avoids all fruits without a Greek or Hebrew name because his ancestors would not have eaten them. And he drinks only beverages that are at least a thousand years old. Don't offer the man an orange Shasta.
Taleb, now in his early 50s, lives his philosophy and believes everyone else should too. You must have "skin in the game," as he puts it repeatedly. He uses that phrase, by my count, 28 times in Antifragile, and it's central to his worldview and integral to his critique of the "fragilista": the sucker who sits on the sidelines, who doesn't know what he thinks he knows, who lacks the pluck to risk his own fortune and reputation. Unlike Taleb. "I have only written, in every line I have composed in my professional life, about things I have done, and the risks I have recommended that others take or avoid were risks I have been taking or avoiding myself," he writes. "I will be the first hurt if I am wrong."
Here's an example. Taleb made a lot of money when the housing bubble burst in 2008. Common wisdom had it that housing prices go up, because they had always gone up. Taleb told me it was obvious to him that executives at Fannie Mae, the government-sponsored mortgage company, didn't understand the concept of "fat tails," that is, they didn't understand the extreme risks of the investments they held. In retrospect that's obvious, but it was not a widely held opinion back then. The handful who bet on the unthinkable made a killing, including Taleb.
He asked me how much I thought he made during the crisis.
"I don't know," I said.
"Guess."
"Five million?"
He laughed. "Try times 10," he said.
Later, he made a reference to $30-million, so I'm unsure of the exact figure, not that it matters: Taleb was already wealthy. He had made his first millions on Wall Street by age 27. "I became successful because I knew what I learned in school about probability was bullshit," he said. "That's when my war with academia started."
Taleb is in the university but not of it. He spent the first couple decades of his career as a derivatives trader before turning to scholarship and essay writing in his mid-40s. Taleb is a professor of risk engineering at the Polytechnic Institute of New York University. Despite his wall of degrees (he has an M.B.A. from the University of Pennsylvania's Wharton School and a doctorate from the University of Paris), he believes that universities propagate "touristification," another term he coined, a phenomenon that occurs when what should be an exciting exploration turns into a programmatic exercise. It's better to be an adventurer than a tourist. Education isn't the only result of this modern sin; gym machines and "the electronic calendar" fall short as well.
Taleb has a low opinion of most professors. He titles one section of the new book "The Charlatan, the Academic, and the Showman." In a chart, Taleb divides professions into three categories: fragile, robust, and antifragile. It's bad to be fragile, better to be robust, best to be antifragile. Artists and writers are antifragile. Postal employees and truck drivers are robust. Academics, bureaucrats, and the pope are fragile. Benedict, beware.
"I don't rely on external confirmation, and I have a happy life."
Most of Taleb's ire is directed at business schools, specifically the one at Harvard. At Harvard they "lecture birds to fly," then arrogantly claim credit when the fledglings become airborne. He rails against the "Soviet-Harvard delusion," linking an institution that's graduated thousands with a state that killed millions. What is the delusion, exactly? It is a belief in a top-down system that tries to control and protect, purportedly for mankind's benefit, thereby eliminating the natural stressors and necessary randomness that create strength and encourage enterprise. Dekulakization and course catalogs are symptoms of the same ailment.
Taleb has no patience for so-called structured learning. "Only the autodidacts are free," he writes in the book. He pursued his real education in his spare time, doing only as much as was required to pass his courses. At 13, he set himself a goal of reading for 30 to 60 hours a week, pretty much a full-time job. To prove that he hit the books with enthusiasm, Taleb ticks off the names of more than 30 great writers he has read. We don't learn much about what he gleaned from this ardent page-turning or which authors influenced his own style. He does give the following assessment of the work of Austrian novelist Stefan Zweig: "didn't like."
Actually, Antifragile feels like a compendium of people and things Taleb doesn't like. He is, for instance, annoyed by editors who "overedit," when what they should really do is hunt for typos; unctuous, fawning travel assistants; "bourgeois bohemian bonus earners"; meetings of any kind; appointments of any kind; doctors; Paul Krugman; Thomas Friedman; nerds; bureaucrats; air conditioning; television; soccer moms; smooth surfaces; Harvard Business School; business schools in general; bankers at the Federal Reserve; bankers in general; economists; sissies; fakes; "bureaucrato-journalistic" talk; Robert Rubin; Google News; marketing; neckties; "the inexorable disloyalty of Mother Nature"; regular shoes.
The social sciences make the list, too. He contrasts them with "smart" sciences, like physics. He mocks social scientists as mired in "petty obsessions, envy, and icy-cold hatreds," contrasting the small-mindedness of academe with the joie de vivre of the business world. "My experience is that money and transactions purify relations," he writes. "Ideas and abstract matters like 'recognition' and 'credit' warp them, creating an atmosphere of perpetual rivalry." In our interview, he went even further, saying he would "shut down" the social sciences. "Those guys are living in their own world," he said. "That is the truth. You don't need them."
I pointed out that he praises some psychologists, like Daniel Kahneman, and regularly refers to psychological concepts in Antifragile. Would he padlock the psych labs, too? No, he told me. "Psychology is more empirical," he clarified. Sociologists, on the other hand, would presumably be better off delivering mail.
He saves his iciest hate for economists. Taleb has no use for the "charlatanic" field, comparing economic research to medieval medicine. Economists are, in his estimation, weak, ignorant, fearful, and generally pathetic. At one point he fantasizes about beating up an economist in public.
Taleb singles out his least-favorite economists, including Robert C. Merton, a professor of finance at MIT, formerly of Harvard, and Myron Scholes, a professor emeritus of finance at Stanford, who jointly received the Nobel Prize in 1997 for their model of valuing derivatives that's designed to hedge against risk. Merton is "serious, mechanistic, boring," according to Taleb, and the two used "fictional mathematics" in their research. He calls this "unsettling" in a footnote, though in the earlier draft he sent me he used a harsher word. I'd wager that punch may have been pulled by Random House's legal department. Merton didn't return my messages, and Scholes politely declined to comment.
Gary Pisano, however, was willing to talk. Pisano, a professor of business administration at Harvard, is singled out in the book for his "dangerous" thinking; Taleb hammers him for supposedly misunderstanding the market for biotechnology. Pisano told me Taleb didn't know what he was talking about. "His argument is about these rare events that generate huge returns," he said. "That doesn't happen in biotech." The specifics of that debate aside, Pisano shrugged off the criticism and said he had enjoyed Taleb's work in the past: "I think he writes some very interesting and provocative things, but I think it gets a little lost in the manner."
The idea that Taleb's insights are sometimes overwhelmed by his belligerence is a longstanding criticism. Articles published in the American Statistician soon after The Black Swan appeared chastised him for his alleged ignorance of "entire subfields of statistics," committing mathematical errors, and lobbing "gratuitous insults" at statisticians. The opprobrium was mixed with gratitude that, whatever his faults, Taleb had managed to shine a bright light on an arcane topic. Still, you got the sense that statisticians were smarting. Taleb's fans—and there are many of them—see his abrasiveness as proof that he doesn't tolerate nonsense. They show up in droves to hear him speak, leave rapturous reviews on Amazon, and praise his television appearances. One YouTube commenter put it succinctly: "He's so awesome."
While Taleb dislikes the university system and doesn't respect career academics, he's not against education per se. Studying mathematics is fine for its own sake. And it's worthwhile to read the classics. But modern scholarship is bewitched by novel findings—what Taleb dubs "neomania"—and researchers are driven by their need to publish, perverting their efforts and tainting the outcome. "How can knowledge be something you do for professional advancement?" he asked. But, you might counter, Taleb is a professor at a university who publishes in journals. It would be one thing if he were blogging from a cabin somewhere, but isn't he part of the problem he's identified?
Ah, but he doesn't publish papers to advance his career. They are technical addenda to his popular books. "I ban myself from publishing anything outside of these footnotes," he writes in Antifragile. Because of his success, he is not beholden to deans and committees or anyone else, for that matter. "You cannot rely on external confirmation and have a happy life," he told me. "I don't rely on external confirmation, and I have a happy life."
I wanted to know more about that happy life, which is why I flew to New York to meet Taleb. When he arrived at lunch, he was wearing a plain black shirt, black shorts, and sandals of some kind (not regular shoes, which, as stated earlier, he opposes). He writes in Antifragile that readers, upon meeting him, "have a rough time dealing with an intellectual who has the appearance of a bodyguard." I wouldn't have guessed bodyguard, though he is thicker—thanks to a newfound love of weightlifting—than he appeared in publicity shots for The Black Swan, published in 2007. Taleb has less hair these days, and more of it is gray. He speaks rapidly and conspiratorially, punctuating his remarks with "You see?"—though the way he says it is more imperative than interrogative. You will see.
We sat outside, where it was difficult to hear over the din from the street and the chatter of fellow diners. The waiter screwed up his order. Taleb seemed generally agitated and uncomfortable. That was understandable, I thought: He's been in his head, writing his opus, the book he believes is more significant than his big best seller, and then somebody starts poking at him before it's been delivered to the printer. That could put a person on edge. The double espresso he knocked back didn't help either.
After we ate, Taleb asked if I wanted to accompany him to a nearby bookstore. I said sure. When we arrived, he turned to me and asserted that any article I wrote should be in the form of a question-and-answer column. I bumbled a response, telling him that's not what I had in mind (indeed, in an e-mail, I had used the word "profile" twice). This was unacceptable to him. "Go write fiction then!" he exclaimed. "I haven't given you enough for a profile anyway!" We parted on bad terms and exchanged a few curt e-mails the next day. A planned follow-up—we were going to rendezvous at a restaurant in his neighborhood—didn't happen.
Taleb writes about storming out of meetings with publishers and interviews with radio stations. That usually happens when he feels he's been insulted. The publisher suggests he take speaking lessons or the radio host tells him his answer is too complicated. Perhaps I accidentally insulted him or didn't sufficiently appreciate his ideas. Or maybe my questions about his weightlifting and dietary habits were too intrusive. I don't know what set him off. But considering his history, maybe I should have seen it coming.