R49081 Is Prime (mersenneforum.org)
160 points by colinprince on Nov 21, 2022 | 95 comments



R49081 means this was a ~49081 digit number, or a number that could be represented in maybe 163043-ish bits or ~20kB or so?

I'm kinda curious if this is a data-size that might be better on CPU rather than GPU. GPUs of course are better compute, but GPU-registers tend to cap out at ~1024 bytes or so, and a number of this size probably would have to be stored in RAM (plus all the associated data structures in ... whatever algorithm happened here).

If this were small enough, I can kinda imagine CPUs being better due to caching effects (L1 and L2 are incredibly fast on CPU, and GPUs kind of miss those tiers in some sense since GPUs focus on register space).

It could just be that this elliptic-curve prime-testing program is currently a CPU kinda function. But I'm curious whether this algorithm would be (or wouldn't be) suitable for GPU compute, with more TFLOPS and more VRAM bandwidth for cheaper.

EDIT: A 20kB number would fit inside of __shared__ memory on a GPU, so perhaps various operations (multiplication and/or division??) could be parallelized onto a 256-wide/1024-wide workgroup (~1024 to 4096 bytes per clock tick, or ~5 to ~20 clock ticks to perform most kinds of operations on the 20kB number on one compute unit). It really depends on the kind of algorithm going on here, as well as what needs to be searched for. I did a brief read into ECPP testing, and I honestly don't get anything they're saying in the paper (A. O. L. Atkin and F. Morain, 1993, "Elliptic Curves and Primality Proving").


For the Mersenne project (searching for prime numbers around 110 million bits in size), GPUs are quite efficient.

One of the algorithms used, called PRP (probable prime), uses the converse of Fermat's little theorem (a^(p-1) == 1 mod p if p is prime) and involves computing 3^(p-1) mod p (where p is a 110-million-bit number). This is called "modular exponentiation", and is done efficiently with FFTs (Fast Fourier Transforms, used to implement multiplication, and thus squaring).
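To make that concrete, here's roughly what a Fermat PRP check looks like in Python (just a sketch — the real clients implement the modular squarings with FFTs, which Python's built-in pow hides from you):

  def fermat_prp(p, base=3):
      # Fermat's little theorem: if p is prime, base**(p-1) % p == 1.
      # The converse can fail (pseudoprimes), so a pass only means
      # "probable prime", not "prime".
      return pow(base, p - 1, p) == 1

  # e.g. the repunit in question (slow in pure Python, but it finishes):
  # fermat_prp((10**49081 - 1) // 9)   # -> True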

The FFT of 110 Mbits can be implemented efficiently on GPUs. The key to keeping the hot data in GPU registers or caches is "locality of data access". Although the FFT algorithm is non-local par excellence (it has a tendency to access all the data all the time), it can be split into "blocks" which have a smaller hot-data size, and this allows efficient GPU implementations.
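For the curious, here's a toy version of FFT-based multiplication in Python/numpy (a sketch only: float rounding limits it to modest sizes, and real implementations use bigger word sizes and careful error control):

  import numpy as np

  def fft_mul(a, b, base=10):
      # a, b are little-endian digit lists; returns the digits of a*b.
      n = 1
      while n < len(a) + len(b):
          n *= 2
      # pointwise product of the FFTs == convolution of the digit lists
      conv = np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)).real
      digits, carry = [], 0
      for c in np.rint(conv).astype(np.int64):
          carry += int(c)
          digits.append(carry % base)
          carry //= base
      while carry:
          digits.append(carry % base)
          carry //= base
      while len(digits) > 1 and digits[-1] == 0:
          digits.pop()
      return digits

  # 12 * 13: fft_mul([2, 1], [3, 1]) -> [6, 5, 1], i.e. 156

This is why "multiplication" and "FFT" keep showing up together in prime hunting: squaring a 110Mbit number naively is quadratic, while the FFT route is n·log n.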

The problem with R49081 may be that it's a small number, so it's hard to fill all the processing units of the GPU with the amount of parallel work the algorithm offers at this size. Another problem may be that the elliptic-curve algorithm is more complex, so more work is needed to express it in GPU terms.

http://mersenne.org/


> The problem with R49081 may be that it's a small number, so it's hard to fill all the processing units of the GPU with the amount of parallel work the algorithm offers at this size. Another problem may be that the elliptic-curve algorithm is more complex, so more work is needed to express it in GPU terms.

Well, 20kB is small, but also large. Too large to fit inside of 32-bit registers on the GPU (or even in 200+ such registers), too small to really take advantage of the GPU's much faster VRAM.

It's an interesting size. Larger primes would almost certainly benefit from a GPU, no doubt. Smaller primes would also benefit from a GPU (see cryptocurrency miners, where the entire program fits inside a single shader).

20kB though? It's... a weird size. Very interesting. If it were much bigger, or smaller, I'd be confident about porting the algorithm over to the GPU. But it's actually more intimidating at that size (especially since the 64kB L1 caches of CPUs are so darn good).

If I were forced to do this in GPU space, I'd have one workgroup per number. I'd hope that the majority of the compute time would be spent on multiplication/division routines that would benefit from all 1024 threads (a maximum-size workgroup). But it'd be a more difficult program to write than for a larger or smaller number.


These are "repunit" primes, meaning that it's the digit "1" repeated N times, in this case 49081 times. So yes 20KB.


47.9 KiB in fact, assuming we are using 8 bits to the byte.


Where are you getting that?

log2((10^49081 - 1) / 9) / 8 / 1024 ≈ 19.9

About 20 KiB
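Or just let Python check it (quick sanity test, nothing fancy):

  n = (10**49081 - 1) // 9
  print(len(str(n)), (n.bit_length() + 7) // 8)   # 49081 digits, 20381 bytes

20,381 bytes ≈ 19.9 KiB, so "about 20 KiB" holds up.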


Ah. For some reason my head was locked in binary, and even then I was forgetting to convert Kbit->KiB. My only excuse is a bad night's sleep…


Only if you're representing it as an ASCII or UTF-8 string with one byte per character, or if you're using binary-coded decimal with 8 bits instead of the typical 4 for each decimal digit.

255 (3 digits) is the largest value in one byte, 65,535 (5 digits) fits in two bytes, 16,777,215 (8 digits) in three bytes, 4,294,967,295 (10 digits) in four bytes, etc. Those aren't 3, 5, 8, and 10 bytes, they're 1, 2, 3, and 4.


> R49081 means this was a ~49081 digit number, or a number that could be represented in maybe 163043-ish bits or ~20kB or so?

A single (very large) number, that I could not represent on any computer I owned until Christmas morning 1983.

Mad.


Intel is offering customer silicon on their latest architecture. So you could be ambitious and design your own block with as much SRAM as you can get by sacrificing other accelerator units, depending on routing latency.

Edit: customer not custom, small difference but important if you're asking..


Custom / semi-custom is probably an old Altera FPGA kinda thing, which means Verilog (or similar hardware description languages), which is much more difficult to use in my experience.

I generally see the "compute" hierarchy as:

1. General Purpose CPUs -- easiest.

2. CPU acceleration -- AVX512, NEON, SVE, etc. etc. Specialized CPU instructions that only exist on certain versions of hardware.

3. General Purpose GPU -- OpenCL, DirectCompute, ROCm, CUDA. General purpose GPU instructions that work on a wide variety of hardware.

4. GPU accelerated units -- Raytracing, matrix-multiplication, bfloat16 support, wave intrinsics. I guess DirectX12 Ultimate has these but you definitely need to be checking to see if your hardware supports this before using them.

5. FPGAs -- Over here baby, but much much harder than #4 in practice.

-------

I know Intel has FPGA + Intel core chips, and that Microsoft has used them before for Bing search and other such projects. But no one ever told me it was easy.


> The certification took 20 months on an AMD 3990x computer (64 cores); and verification took about 13 hours.

I'm absolutely dumbfounded (and awestruck) that there are people who dedicate this much compute power to solving these tasks.


I'm more surprised, assuming their algorithm is somewhat parallelizable, that they didn't run it on an actual supercomputer; it likely wouldn't have taken 20 months.

EDIT: reading a wiki page, this sounds like recreational mathematics, not actual research math, which is a shame really


> that they couldn't run their algo on an actual supercomputer, it likely wouldn't have taken 20 months

I am assuming that is how the result was verified much faster - >1,000x the computing power used. Without looking deeper (like actually reading the article!) I couldn't say why that would be in this particular case, but such a disparity between the power used to make a finding and the power used to verify it by repetition is often the case when the initial result is discovered by a recreational researcher or small group, not a better-funded one.

Not unlike the human scaling seen when one individual or a small group publishes a proof that could be significant, and many others pore over it to see if they can find a flaw before it is generally accepted as a genuine new finding (an order of magnitude or more of human brains to check than to initially find).


The results aren't verified by running the same calculations again. Rather, there's a verification method used along with some information from the original solution - that's the certificate they mention.

Imagine factoring a non-prime number. It would take you quite long to factor 695587 with nothing but a cheap calculator, but you could very quickly verify my factorization if I give you the information that it is divisible by 71 and 101.
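In Python terms:

  >>> 695587 % 71, 695587 % 101
  (0, 0)
  >>> 71 * 101 * 97   # the cofactor: 97 = 695587 // (71 * 101)
  695587

Finding the factors is the slow part; checking them is a couple of divisions.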


Though factor-checking-style verification won't work for "this number is prime" the same way it does for "this number is not prime". A test that concludes a number is prime doesn't output any factors to check, because if such factors existed the number would not be prime.

I'll have to read up on the certification/verification process to comment any further though.


It does work that way, just not as straightforwardly.

See https://en.wikipedia.org/wiki/Certificate_%28complexity%29 if you're interested in learning more.


This problem is also hard to parallelize. There is a lot of required communication between computation lanes (sections of the number), and the calculation has a lot of dependencies on that communication.

You could probably design a supercomputer specifically for prime searches, but it would be a weird one.


I've calculated ~20kB to store this number in particular.

GPU __shared__ memory is 64kB. A 1024-wide workgroup can store 4096 bytes per register (4 bytes × 1024 threads) if the whole workgroup is working on just one number. So ~6 registers across the 1024-shader workgroup to store the full 20kB number.
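Back-of-the-envelope version of that layout (plain Python, not GPU code — the lane count is just the workgroup width from above):

  n = (10**49081 - 1) // 9
  LANES = 1024                         # threads per workgroup
  limbs = []                           # little-endian 32-bit limbs
  while n:
      limbs.append(n & 0xFFFFFFFF)
      n >>= 32
  per_lane = -(-len(limbs) // LANES)   # ceil division
  print(len(limbs), per_lane)          # -> 5096 limbs, i.e. 5 per lane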

__shared__ memory would allow for very quick shader-to-shader communications.

I think... it can work. But it's difficult for me to visualize the full program. It's going to be complex, but probably doable. It's also not really a design pattern that a lot of GPU code (ex: Thrust) works in very well; you'd have to use harder-to-use libraries like CUB instead.


To be clear, this problem (prime searching) works well on a single compute node. It doesn't work well if you have to scale to many nodes, because communication latency quickly bounds your speed.

That's why GIMPS is the way it is: with volunteers offering computers that are separated and do a siloed calculation.


I can believe that.

But the fact that this was done on a Threadripper 3990x, a high-core count (but slow interconnect / core-to-core communication) kind of computer, makes me believe that there's a degree of parallelism in this problem.

Otherwise, another computer (ie: higher GHz / less core count) could have been used instead.

---------

I really don't know this specific algorithm: ECPP. It comes down to how that works... especially its parallelization.


AFAIK it comes down to efficient FFTs for multiplying the large numbers, which are well-suited to GPUs, and this could likely have been done a lot faster on a GPU.

I'm guessing there just isn't a good CUDA implementation out there.


Certification is the creation of a proof that the number is prime (consisting of some related numbers), while the verification checks that the certification is correct?


Yes. In particular, "certification" refers to the fact that the proof is computationally difficult to create, but once it's created, it's easy for a third-party to check its correctness (insert your P/NP reference here).

https://en.wikipedia.org/wiki/Primality_certificate


I was thinking that "certification" referred to ensuring that the program they wrote to test it was correct, and that "verifying" was running it, but your interpretation makes sense too.


I'm equally dumbfounded that with all this computing power no one has written an AI that can identify primes 99.999% of the time for quicker testing. Train that thing and reverse its logic. I'm not one of those people who think primes are the keys to the universe. I'm not sure how one day people realized there's no pattern, just a form of embedded randomness, behind any irrational number like Pi... yet still think the prime sequence is more important than that.

[edit] it's as if we're so desperate for order to arise out of chaos that we'll bend reality to find the order in any sequence. Seems like our own problem, not God's.

[edit2] I am eagerly awaiting any response to know why this is such an unpopular comment! I'm excited to have my mind changed. Tell me.


> I am eagerly awaiting any response to know why this is such an unpopular comment! I'm excited to have my mind changed. Tell me.

I suspect it's because "an artificial intelligence that can identify primes for quicker testing" is a phrase that would probably be recognized by experienced developers in this space as a bad idea. There are already very good ways to identify prime candidates; no artificial intelligence is needed. They just take a while because even though they're very efficient, it takes a long time to operate on staggeringly huge numbers.

For a more relatable example, you can build an artificial intelligence that understands what you mean when you type "2 + 2". But this is a fantastically inefficient and expensive way to compute the value "4". Therefore, no one does this in a serious way. Even GPT-3, one of the more advanced commercially available models with hundreds of billions of parameters, struggles with multiplying two-digit numbers together (one study found a success rate of 20%).


How do you know you can't train a large image model to lean toward preferring large embedded primes?


I believe this is an unpopular comment because “training an AI to identify primes” is pretty much technobabble.

What sort of AI are you referring to? A deep neural network? Why should this be better than our current techniques?


Really? It's not technobabble. Encode the first few thousand primes in QR codes and pictures of ducks, throw 2000 GPUs at it, and see how long it takes to train a model to pick them out of the other similarly encoded integers. Tune for success rate with the next few thousand primes. Even if it's only right 20% of the time you've vastly increased the efficiency of finding new prime numbers.


You're making a lot of generic guesses and assumptions. Do you really think nobody has ever thought of this?


I'm not sure this is an application where using AI makes sense. For example, there are a large number of algorithmic ways to determine if a number is probably prime -- https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality...
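For reference, Miller-Rabin fits in a few lines of Python (a standard textbook version, not anything tuned for 49081-digit numbers):

  import random

  def miller_rabin(n, rounds=40):
      # Composites survive each round with probability <= 1/4, so 40
      # rounds makes a false "probably prime" vanishingly unlikely.
      if n < 4:
          return n in (2, 3)
      if n % 2 == 0:
          return False
      d, s = n - 1, 0
      while d % 2 == 0:                # write n-1 as d * 2**s, d odd
          d //= 2
          s += 1
      for _ in range(rounds):
          a = random.randrange(2, n - 1)
          x = pow(a, d, n)
          if x in (1, n - 1):
              continue
          for _ in range(s - 1):
              x = pow(x, 2, n)
              if x == n - 1:
                  break
          else:
              return False             # witness found: definitely composite
      return True                      # probably prime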


Why unpopular? You wrote "can identify primes 99.999% of the time for quicker testing", which shows several fundamental and basic flaws in your understanding.

We've had approximate primality testing methods for a long time and they are quite effective. See https://primes.utm.edu/notes/prp_prob.html (citing https://www.ams.org/journals/mcom/1989-53-188/S0025-5718-198... ): for a 100,000-digit number the odds of Fermat's test for compositeness giving a false result are 1.3E-10584, which is 99.9999999999999{insert 560 nines here}999999999%.

This is why, in the AMS paper cited as [1] in this link, Dubner wrote "it is virtually certain that R49081 is prime." The full context is:

> In September 1999, it was discovered that R49081 is a probable prime. This was verified on several different computers with different software. Although it is virtually certain that R49081 is prime, it is necessary to prove rigorously that it is prime.

A test that is only 99.999% likely falls far short of the "virtually certain" test actually used, and in any case doesn't achieve the goal of rigorously proving it's prime.

Even worse, these standard primality testing methods have a 0% false negative rate, while your hypothetical AI might miss actual repunit primes even with a 0.000001% error rate.

> just a form of embedded randomness, behind any irrational number like Pi

This doesn't make sense. There are irrational numbers with no randomness, like Liouville's constant and the Champernowne constant. And there are plenty of patterns in primes - the Ulam spiral Wikipedia entry even compares the clearly apparent structure of that spiral to a random distribution, at https://en.wikipedia.org/wiki/Ulam_spiral .

Your comment therefore comes across as very ill-informed, and your doubling down doesn't help.


It wasn't my contention that an AI could identify every single prime on the number line or that it could prove whether a number was prime. I was saying that an AI could increase the efficiency in finding candidates for testing.

I'm not going to triple down and say I know God exists or doesn't; I just made the observation that humans find order in chaos where no such order exists. A lot of algorithms generate the appearance of a pattern while having no real pattern, and I don't see why primes are special. (Practical usefulness of proof in crypto aside, which could also be done less elegantly with any set of irrational numbers calculated to a sufficiently difficult number of digits.)


You were guessing that an AI could increase efficiency, using numbers which show you know nothing about the topic.

Primes have lots of order and patterns, so references to chaos and "no such order exists" aren't really relevant.

As to "why" - this is recreational mathematics. It's because people find it fun or engaging. And it isn't special - recreational mathematics covers far more than just primes. There's a lot of people who love calculating digits of pi, as one clear example.


>> There's a lot of people who love calculating digits of pi, as one clear example.

Right... that's what I meant about irrational numbers. I'm not trying to rain on the parade of the hobby of it all, but if there were an actual pattern, primes would not be useful in cryptography. The hobby of trying to find "the pattern" is Sunday morning armchair alchemy. Nothing wrong with it, and maybe there's a pattern to be found one day. Great. Whether there is or isn't is only relevant given that we don't know what that pattern is yet. It doesn't, as some imagine, imply the existence of a higher order intelligence.


As I pointed out irrational numbers can easily have a well-defined structure. Here's the start of the Champernowne constant:

  0.1234567891011121314151617181920212223...
It's not only irrational, but it's transcendental and normal.

Yet it has so much structure that you can easily figure out what any digit 'n' will be.

> The hobby of trying to find "the pattern" is

I have no idea what you are talking about with "the pattern".

> if there were an actual pattern, primes would not be useful in cryptography

Why do you think there are no patterns and that all of the following people are wrong?

* "Patterns in prime numbers" - https://mathsreach.org/Patterns_in_prime_numbers

* "Patterns in the Primes" - https://www.maa.org/meetings/calendar-events/patterns-in-the...

* "Math Mornings at Yale: The Patterns in the Primes, with Andrew Granville" - https://www.youtube.com/watch?v=pO7Egc5Dtqs

* "New Pattern Found in Prime Numbers" - https://phys.org/news/2009-05-pattern-prime.html

* '"Patterns in the Primes" by Stephanie Hanson - Week 5 - MathSoc Public Lecture' - https://www.youtube.com/watch?v=VkbPhjU-S9Q

Note that none of these refer to a definite "the pattern."

Primes are useful in the RSA cryptosystem because modular exponentiation is cheap, while in general factoring the product of two large prime numbers is far less tractable - as far as we know.

There are so many patterns in primes that cryptosystems have been broken by choosing the initial primes poorly. See https://en.wikipedia.org/wiki/RSA_(cryptosystem)#Faulty_key_... .


If we consider prime numbers above 1000 digits

  def is_prime(n):
      return False
Would be 99.99% accurate.

The issue is that without 100% certainty, there's not really any use for a "this is very likely probably prime" checker.


Over a few million digits, if you were trying for the Guinness book, wouldn't it be handy to have something that narrowed it down?


In the time you spent writing this you could've googled Mersenne primes and how candidates are selected. You would've found out there are loads of techniques to narrow them down, and that we're looking for specific types of primes as the first step in this. Then you would've avoided coming off as a Dunning-Kruger know-it-all...


Prime numbers are the basis of most practical cryptosystems (including ones which you used to make your comment), and other applications of number theory, and are the heart of number theory itself.


In other Mersenne news, it has now been the longest-ever wait for the discovery of a new largest Mersenne prime since the GIMPS project started (nearly 4 years now):

https://www.mersenne.org/report_milestones/


This isn’t a Mersenne prime.

https://www.ams.org/journals/mcom/2002-71-238/S0025-5718-01-... says “The Repunit R49081 = (10⁴⁹⁰⁸¹ − 1)/9 is a probable prime.”, so we’re talking numbers that get written as all ones in decimal, not binary.


I’m referring to the forum name.


Question: how does the density of R(x) primes, with respect to x, compare to the density of primes? Are they the same?


We don’t even know whether there are infinitely many decimal Repunit primes (https://primes.utm.edu/glossary/page.php?sort=Repunit)

Even if that were true, we know there always is a prime between n and 2n (https://en.wikipedia.org/wiki/Bertrand%27s_postulate). Since there’s a factor of about 10 between successive repunits, the repunits themselves are already spaced further apart, let alone those that are prime (for which the first filter is “for R(x) to be prime, x must be prime”)

So, no, they aren’t equal.


Thanks for the answer.

I meant with respect to x.

So if R1 to R100 were all prime for example, for that range I consider it 100% dense by my definition.

Instead it's 0.0000000…00000009% dense, computed as 100/R100.

Hope that makes sense.

My definition of course may be at odds with mathematical conventions.


Primality proof (which you can download!) is 114,314,839 bytes: http://factordb.com/index.php?id=1100000000013937242 (check section Primality)


Can someone ELI5 this post?


R49081 is the natural number consisting of 49081 “1” digits in decimal base, or (10^49081 - 1)/9. This number has now been proven to be a prime number, using some number-crunching algorithm. Somebody else will have to ELI5 that algorithm.


I don't think any of you have met five year olds.


This would be totally inexplicable to 5 year olds, except the odd genius. But we took it as intended i.e. "can someone explain this in a simpler way, defining terms like "R""


Idk, I'd try something like "you know how the number one is a single 1 digit, and eleven is two 1 digits? If you have 49081 one digits, that number is really big! We don't even have a word for it, so we call it R49081. Some very smart people worked really hard and found out that number is prime!", after explaining what "prime" is (which I think would be a lot easier).


I like that you've tried, however I have an 8yo (Y3 in UK, so 2nd grade US) and I don't think she would get it yet. She has just started simple division while doing times tables. She hasn't covered the concept of remainders from division, and has no concept of fractions. I could probably just about explain the concept of a prime to her, but it's using concepts she hasn't learnt. She is also generally towards the top of her class for maths.

My 4.5yo will have no chance, more interested in sharks, trains or even better shark trains!


Don't forget that 5 year olds generally haven't learned addition or subtraction yet, let alone division (which is generally taught at 7 years AFAIK). Explaining what "prime" is will be difficult.


A number is prime if its Numberblocks character is a single column.

Numberblocks that are rectangles are not prime.

My son knew of "primality" at 5 because of this.

https://www.youtube.com/@Numberblocks


ELI5 posts are reduced to fantasy by the attention span of actual 5 year olds if nothing else. :-)


Not all five year olds are the same, and the premise is that a 5 year old who wanted to know asked.


Most young children want to know everything, it's the attention span to make it through an explanation of prime numbers that would be rare.


Okay, so let us try to explain the concept of looking for large primes by analogy, so we don't have to explain all the mathy stuff. If you tried to convey this to an actual five year old you would probably have to speak more in questions to keep them engaged, interact more, and use even simpler language - but all that is hard to convey in written text, in a language I have never spoken to a five year old. But I digress, so here we go:

You know how some things are red and some are not? There are probably multiple red things in the same room as you are right now. Most people can easily tell if something is red or not by simply looking at it.

But what about places we cannot see? Even though we have never been there, it is a safe bet that there are red things in, let's say, New York. And if we wanted to make sure, we could book a flight and go check. But is there something red in every village in the world? My guess would be yes, but we do not know for sure, and it will be hard to visit every place because there are so many.

At this point you might have explained some things, but if you have not lost them yet you could take this further.

There are places even further away than every village. We know that Mars is red because it is so big and people have sent a robot there to be absolutely sure. But what about other planets? It is hard to tell because planets can be very far away and red things can be very small. And going there is no option because even the fastest rocket would not get there any time soon.

What this post mentions is the equivalent of someone building a very precise telescope fixed at a very specific place very far away and they saw something red there. Here the analogy breaks because there are other more conceptual problems with building such a telescope. Otherwise, I am surprised how many similarities there are actually :)


Brain development I would guess too. We are talking 5 year olds. If a 5 year old can read and write well that is pretty advanced. Mathematics comes later. They might count on fingers, know some immediate additions, and vaguely understand the concept of multiplication.


Ok nice circles there! Rest of the owl is explaining prime factorization or indeed even just multiplication.


I knew nothing about Repunit numbers until a few minutes ago. So far, the most interesting bit for me was that:

> As of March 2022, the largest known prime number 2^82,589,933 − 1, the largest probable prime R8177207 and the largest elliptic curve primality prime R49081 are all repunits.

Source https://en.wikipedia.org/wiki/Repunit


Edit: the OP is correct, I was confused by the forum name, as I thought it was about Mersenne primes, but this is about Repunit primes.

Original response:

Almost correct, but it's 2^49081-1, not (10^49081-1)/9. And there's no division by 9 at all.

https://en.wikipedia.org/wiki/Mersenne_prime


No, this is just on the mersenne prime forum. If you read [1] from the link it tells you OP is right.


I think the parent is right. This seems to be a decimal repunit, not binary. I can't find a positive reference, though.


https://primes.utm.edu/top20/page.php?id=57, linked in the article, begs to differ.


Thank you! You're right, this is not about a Mersenne prime, but about Repunit prime. Sorry about that!


Is there a practical application for this, or is it more of a large-pizza problem? https://pbs.twimg.com/media/CHtVwLXXAAAeKR6?format=jpg&name=...


So this number is prime? Cool.

  const R49081 = BigInt('1'.repeat(49081));


Apparently it's just that many 1s in a row, read as a single number.

https://en.wikipedia.org/wiki/Repunit

It's part of "recreational mathematics", which resolves a longstanding question I've had ever since finding out about a "special" set of numbers that made me think "What the hell was that?"

Basically the question was: "Why can't I just come up with any dumb rule I want? Like the number of letters in the number being equal to the number of digits in the base-ten representation, or some other noise like the number of toothpicks required to represent it in decimal, and then do some wacky proofs regarding 'toothpick count'?" Apparently the answer is "well, you can!"

Alright then. There exists a non-serious branch of math.


> Like the number of letters in the number being equal to the number of digits in the base ten representation

You might not be happy that someone investigated this because you apparently made up this example in order to make fun of this kind of question, but I looked into this a little bit and it's more subtle than I first thought.

The typical random integer small enough to have a well-defined English name (following Guy and Conway's prefixes at https://en.wikipedia.org/wiki/Names_of_large_numbers#Extensi... which will go up to 10³⁰⁰⁰-1) has a significantly longer English name than its decimal expansion. E.g. if we take fifty random digits like

96689907044105050355231217362382284051155144740067

the name of this number is "ninetysixquindecillionsixhundredeightyninequattuordecillionninehundredseventredecillionfortyfourduodecilliononehundredfiveundecillionfiftydecillionthreehundredfiftyfivenonilliontwohundredthirtyoneoctilliontwohundredseventeenseptillionthreehundredsixtytwosextillionthreehundredeightytwoquintilliontwohundredeightyfourquadrillionfiftyonetrilliononehundredfiftyfivebilliononehundredfortyfourmillionsevenhundredfortythousandsixtyseven"

or 430 letters, while the number is written with only 50 digits.

For small numbers, the closest approach between the lengths of the two is 10 (len("ten") == 3, len("10") == 2). And the typical word length grows faster than the digit length.

But the lengths do cross over many times up in the illions, as the illion terminology is unusually short compared to the size of the number described. Specifically, the smallest integer where this occurs is 1,000,000,000, where len("onebillion") == len("1000000000"). That's the smallest member of your set.

There are other examples such as 10¹³+1, 10¹³+2, 10¹³+6:

len("tentrillionone") == len("tentrilliontwo") == len("tentrillionsix") == len("10000000000001")

And there are many more further out into the illions.

For any example in which "one", "two", or "six" appears in the name of a number, any of the others can be substituted and the property will still hold. Likewise for "four"-"five"-"nine" and "three"-"seven"-"eight".


I just noticed that there don’t seem to be as many stars out tonight as there usually are.


I understood that reference!

EDIT: the reference

https://en.m.wikipedia.org/wiki/The_Nine_Billion_Names_of_Go...


Thanks, I was pretty sure somebody had taken the time.

Here's another one for you. Could some uncountable infinities be cardinally smaller than countable infinities if we throw out the property that they must be countable by definition? As in you can say "this is smaller than that, I can demonstrate it's infinite but I can't do a countable demonstration."


> Could some uncountable infinities be cardinally smaller than countable infinities if we throw out the property that they must be countable by definition? As in you can say "this is smaller than that, I can demonstrate it's infinite but I can't do a countable demonstration."

No, trivially. If X has a smaller cardinality than Y then there is an injective function from X to Y (by definition), if Y is countable then there is an injective function from Y to the natural numbers (by definition), then by composing those two functions you have an injective function from X to the natural numbers i.e. X is countable.


It's not just any old dumb rule. Say I wanted to make something really unlikely to be divisible. Well, let's try taking something very large and very evenly divisible, and then add or subtract 1 so that it's always just a tiny fraction off from being divisible. In the addition case we might get something like 10000001. In the subtraction case for the same number we would get 9999999, after which we could factor out 9 to get 1111111. You could also do it with numbers that don't line up with base 10. 360 is a highly divisible number (hence its use for angles), but 361 / 359 are going to be 1/360th out of step with those divisors (and in fact, 359 is prime).

So, for this reason, a repeating sequence of 1s is a natural choice to try and find prime numbers.
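Or in Python, with the divisibility fact that makes prime indices necessary (a quick illustration, not a proof):

  def repunit(n):
      return (10**n - 1) // 9

  # If n is composite, R(n) is composite: R(a) divides R(a*b).
  assert repunit(4) == 1111 == 11 * 101
  assert repunit(6) % repunit(2) == 0    # 111111 = 11 * 10101
  assert repunit(6) % repunit(3) == 0    # 111111 = 111 * 1001

So only prime indices like 49081 are even candidates.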


> There exists a non-serious branch of math.

Try paging through OEIS sometime... https://oeis.org/

There are many seemingly silly bits of math. However, the answer in mathematics is often not the most interesting part (or even interesting at all), and a lot of important mathematics was created to produce answers for seemingly silly or pointless questions.

If you have a silly math problem and it's answered quickly-- well then it didn't waste much time. If you have a silly math problem and it's fiendishly hard to solve, then the discoveries that lead to the solution aren't silly-- they teach us how to do something we didn't know how to do before and may have important and useful implications.

For things like these primality proofs the mathematical discoveries have already happened (at least until someone comes up with a new one!) - and the ongoing effort is in engineering for numerical software and the IT work required to manage the computation in a cost-effective manner.

Some families of primes have important mathematical structure that has a lot of known consequences, special algorithms that work with them (such as repunit primes in base 2), etc. Others don't, or at least don't have any that we know of yet - but they still divide up the literally infinite space of primes and give people interested in studying them a subset they can focus on.


> There exists a non-serious branch of math.

This remains an unproven conjecture that many hope to be true . . .


Well, this gives me hope that the solemn attitudes surrounding recreational mathematics are just math kayfabe and I'm not actually nuts.

Although I will concede those are not mutually exclusive claims.


John Conway and others have at times lamented that whenever somebody arrives at something sufficiently complex to be intriguing with no apparent real-world application ... military gentrification follows within a decade.


I may have misconstrued that ELI5 was snark for "elide", in the same vein as I18N or A11Y.

Present me wishes I could explain Explain ELI5 Like I'm 5 to my past self.


Biiiiiig number with only 1s is magical and doesn't fear the other numbers cutting it into bits and pieces, which is awesome and exciting because it's fun. Daddy only hopes it's got solid self-esteem and doesn't hang out too much in those shady unary parts of the universe.


Question for any mathematicians here: are properties of numbers that relate to their representation in particular number bases considered interesting or useful by mathematicians?

I ask because the wikipedia page for repunit (a term I wasn't familiar with before) implies that it is in the domain of recreational mathematics. So my question refers to professional / non-recreational mathematicians.


This is probably a stupid question, but why do they use an elliptic group rather than, e.g. an ordinary exponential group? It seems they’re using the property that the group order is N-1.


You can do the same thing with modular exponentiation, i.e. Pocklington's primality proving, but that requires finding a large factor of N-1, which is not easy in general.

With elliptic curves you get as many shots at finding a group order with a large factor as you want, since if you fail the current attempt you can simply try another curve with a different order. So you can keep trying until you find a curve that has an easy to find large group order factor which, as it turns out, happens in probabilistic polynomial time.
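A minimal sketch of the N−1 side of that in Python (toy numbers; it assumes you already hold a prime factor q of N−1 with q > sqrt(N), which is exactly the hard part the elliptic-curve version lets you re-roll):

  from math import gcd, isqrt

  def pocklington_witness(n, q, a):
      # Pocklington: if q is a prime factor of n-1 with q > sqrt(n),
      # and some a satisfies a**(n-1) == 1 (mod n) and
      # gcd(a**((n-1)//q) - 1, n) == 1, then n is prime.
      assert (n - 1) % q == 0 and q > isqrt(n)
      return (pow(a, n - 1, n) == 1
              and gcd(pow(a, (n - 1) // q, n) - 1, n) == 1)

  # n = 23: n-1 = 22 = 2 * 11, and q = 11 > sqrt(23)
  # pocklington_witness(23, 11, 5) -> True

With ECPP, the multiplicative group mod N is swapped for an elliptic-curve group, whose order you can change by picking a different curve.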


My understanding is that this particular proof works by constructing a curve over a finite field built from the number to be tested, with a "sufficiently large" prime order; you prove its order by just multiplying a generator by the order. The curve won't work right if the field isn't a field (in other words, unless our candidate is actually prime). The proof has to be applied recursively to prove that the order of the test group is itself prime.

Take your pick of redundant WP articles for a more technical explanation.

https://en.wikipedia.org/wiki/Primality_certificate#Atkin%E2...

https://en.wikipedia.org/wiki/Elliptic_curve_primality


I wonder if the AKS algorithm might have been faster for numbers of this size.


AKS doesn't produce a certificate, so it would make the computation much less useful. Verifying an ECPP certificate is faster than running AKS for sure.

Someone could go around claiming to have proved numbers prime with AKS when really they just passed strong Baillie-PSW ... if someone ever caught their fraud they'd at least make an important discovery of a Baillie-PSW pseudoprime. :)


decimal repunit primes seem weird and less fundamental to me than binary repunit primes.

does this make me weird?


nah, not at all - every Mersenne prime is a binary repunit. i always thought those and prime pairs were the coolest.


Depends on your definition of weird. Someone who cares about primes in any way is already off of the mainstream charts to begin with.


I dunno, primes are extremely useful!


Potentially stupid question: what's the point of finding a prime number this big, and why do we keep trying to calculate the next one?


Firstly, this prime number is not so big compared to other big prime numbers, such as the largest known prime, 2^82589933 - 1, which has about 82 million bits.

About the why... well, when we meet the aliens, for sure it'll come down to "who has the largest prime", and we don't want to disappoint.

https://www.mersenne.org/primes/



