After some discussion with the applied math research groups here at UCLA (in particular the groups led by Andrea Bertozzi and Deanna Needell), one of the members of these groups, Chris Strohmeier, has produced a proposal for a Polymath project to crowdsource in a single repository (a) a collection of public data sets relating to the COVID-19 pandemic, (b) requests for such data sets, (c) requests for data cleaning of such sets, and (d) submissions of cleaned data sets. (The proposal can be viewed as a PDF, and is also available on Overleaf.) As mentioned in the proposal, this database would be slightly different in focus from existing data sets such as the COVID-19 data sets hosted on Kaggle, with an emphasis on producing high-quality cleaned data sets. (Another relevant data set that I am aware of is the SafeGraph aggregated foot traffic data, although this data set, while open, is not quite public, as it requires executing a non-commercial agreement. Feel free to mention further relevant data sets in the comments.)
This seems like a very interesting and timely proposal to me, and I would like to open it up for discussion, for instance by proposing some seed requests for data and data cleaning, and by discussing possible platforms on which such a repository could be built. In the spirit of “building the plane while flying it”, one could begin by creating a basic GitHub repository as a prototype and using the comments in this blog post to handle requests, and then migrate to a higher-quality platform once it becomes clear what direction this project might move in. (For instance, one might eventually move beyond data cleaning to more sophisticated types of data analysis.)
UPDATE, Mar 25: a prototype page for such a clearinghouse is now up at this wiki page.
UPDATE, Mar 27: the data cleaning aspect of this project largely duplicates the existing efforts at the United against COVID-19 project, so we are redirecting requests of this type to that project (and specifically to their data discourse page). The polymath proposal will now refocus on crowdsourcing a list of public data sets relating to the COVID-19 pandemic.
60 comments
25 March, 2020 at 8:22 am
adityaguharoy
Unfortunately, the Overleaf link shows that entry to the page is restricted. Could you share it publicly, please?
[PDF link also added. -T]
25 March, 2020 at 8:27 am
Lars Ericson
This is an excellent COVID data collection: https://ourworldindata.org/coronavirus
25 March, 2020 at 8:35 am
adityaguharoy
I think the World Health Organisation (WHO) can provide us with valuable information.
Recently I have become aware that the WHO has launched online support for individuals via social platforms such as WhatsApp and Telegram.
Given that, one can seek information regarding common individual concerns, and also regarding the data that is being provided to the WHO through these channels.
25 March, 2020 at 8:47 am
adityaguharoy
Reblogged this on 1. Mathematics Scouts.
25 March, 2020 at 8:52 am
David Fry
Terry,
I have noticed over time that you are beginning to use your God-given talents for the good of others, more and more. To me, it is simply your conscious or subconscious realization of God’s purpose for you: to use your talent to help others. If you keep widening this realization, your platform for reaching and influencing others will become worldwide. Then you can really change the world for the better.
25 March, 2020 at 5:37 pm
Anonymous
I agree with you. I wonder why all the biology scientists and doctors over the world do not combine with Prof. Tao’s intelligence to invent a vaccine to kill the virus. Try one time! Prof. Tao is very talented in many fields. I always believe in Prof. Tao.
25 March, 2020 at 9:33 am
Anonymous
A possible method to “clean” the data is to use sufficiently realistic parametric (possibly stochastic) models for the epidemic dynamics and estimate the parameters (including possible data “noise” parameters) which should help to remove most of the “noise” from the data.
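To make this concrete, here is a minimal sketch of the idea in Python, assuming a simple logistic growth model and synthetic multiplicative reporting noise; the model, noise level, and all parameters are illustrative only, not drawn from any real data set:

```python
# Sketch: "clean" noisy cumulative counts by fitting a parametric epidemic
# model and taking the fitted curve as the de-noised series.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth curve K / (1 + exp(-r (t - t0)))."""
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(0)
t = np.arange(60.0)
true = logistic(t, K=100_000, r=0.2, t0=30)
observed = true * rng.lognormal(0.0, 0.05, size=t.size)  # multiplicative noise

# Fit the three model parameters; p0 is a rough initial guess.
params, _ = curve_fit(logistic, t, observed, p0=(2 * observed.max(), 0.1, t.mean()))
cleaned = logistic(t, *params)

print("fitted (K, r, t0):", params)
print("max relative error of cleaned series:",
      float(np.max(np.abs(cleaned - true) / true)))
```

A stochastic or compartmental model (e.g. SIR) could be substituted for the logistic curve within the same fitting framework, with extra parameters for the reporting noise.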
25 March, 2020 at 9:52 am
carcar
The available data are quite messy; in particular there is no homogeneity in the methods followed to collect them. This has awkward results, such as a huge discrepancy in the mortality rate:
https://www.theguardian.com/world/2020/mar/22/germany-low-coronavirus-mortality-rate-puzzles-experts?fbclid=IwAR0aoxfS5O9ZHbHkigS2PdSBRpPBySvy1yhiWIUuAzT2e8dW4BOv7-HzBZ4
Therefore it is not clear how we could possibly ‘polish the data’.
25 March, 2020 at 10:45 am
Peter Morgan
There is a github dataset that seems fairly clean here: https://github.com/datasets/covid-19
25 March, 2020 at 10:55 am
popcubed
I’m sure the following sources are well known but perhaps not to all. The data for the Johns Hopkins COVID-19 dashboard is openly available at Github (https://github.com/CSSEGISandData/COVID-19) in convenient csv format. Similarly for the covid tracking data (https://github.com/COVID19Tracking/covid-tracking-data), though it applies only to the US. Both are curated to an extent but are limited by the sources on which they depend.
25 March, 2020 at 11:10 am
YY Ahn
Here is another one: https://github.com/covid19-data/covid19-data
We aim to make the whole pipeline open and transparent. Country and state names are normalized with ISO 3166-1 codes.
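(For illustration, here is one way such a normalization could be done in Python, using the third-party pycountry package to map free-form country names to ISO 3166-1 codes; this is not necessarily how the covid19-data pipeline itself does it.)

```python
# Map free-form country names to ISO 3166-1 alpha-3 codes via fuzzy search.
import pycountry

for name in ["South Korea", "Russia", "United States"]:
    match = pycountry.countries.search_fuzzy(name)[0]  # best fuzzy match
    print(f"{name:15s} -> {match.alpha_3}")
```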
25 March, 2020 at 11:42 am
Terence Tao
I’ve started a wiki page at http://michaelnielsen.org/polymath1/index.php?title=COVID-19_dataset_clearinghouse to collate these links. Thanks for all the contributions!
25 March, 2020 at 11:51 am
Terence Tao
As an illustration of the types of requests we envision the clearinghouse could assist with, Chris has sent me an actual example of a data cleaning request (tabulating all recent COVID-19 research articles) that their group could use as an illustration. For now I’ve placed it on the above wiki page at http://michaelnielsen.org/polymath1/index.php?title=COVID-19_dataset_clearinghouse#From_Chris_Strohmeier.2C_Mar_25 . Presumably if we scale this project up a wiki is not going to be the ideal platform, but for sake of “proof of concept” this may suffice for now.
26 March, 2020 at 8:58 am
Thomas Vu
Dear Professor Tao,
I am the founder of a Polymath Project-inspired open science startup called AsOne.
My mission was to create a scalable platform which could house hundreds of Polymath-style Projects simultaneously.
Topics on our platform are organized as a tree. Currently under our COVID-19 topic, we are providing the forum for a citizen science project called 1 Million Ventilators. We would LOVE to house your dataset clearinghouse as one of our topics.
Here is our COVID-19 topic page:
https://asone.ai/topic/covid19
Please consider us as your platform!
Sincerely,
Thomas Vu
26 March, 2020 at 9:25 am
Thomas Vu
Here is what the page looks like on our platform:
https://asone.ai/a/C19Datasets
We are a fast-moving startup and can quickly implement whichever features are needed.
My email is thomas@asone.ai for anyone who wants to get in touch.
26 March, 2020 at 12:46 pm
Terence Tao
Thanks for this! At present it does not seem like activity is scaling up to require such a platform, but if there are many more data cleaning requests and cleaned data submissions then we would probably need a more sophisticated platform to evaluate submissions, request clarification on requests, and the like.
26 March, 2020 at 1:11 pm
Thomas Vu
Please keep us in mind if the requests and submissions start to pile up!
We are trying to get the word out about our platform, and hope to become the central hub of all Polymath-style Projects.
27 March, 2020 at 12:02 pm
Michael Nielsen
Unfortunately, due to spam and hosting restrictions, at the moment on the Polymath wiki new accounts are manually created by admins (mostly: me). This is slow and inconvenient. Much of this is due to the nature of the hosting – I use a shared, commercial host, and don’t have the expertise to run my own host.
One possibility: would you or someone else with the expertise be able to provide long-term hosting for the Polymath wiki? We could potentially migrate content there, and add some kind of redirect from the existing URLs.
27 March, 2020 at 2:21 pm
Thomas Vu
Michael, you are the exact person I wanted to get in touch with!
> would you or someone else with the expertise be able to provide long-term hosting for the Polymath wiki?
Yes, absolutely! That would be incredible. Please let me know how I can help the Polymath wiki in any way.
Could you open your DMs to me on Twitter?
27 March, 2020 at 2:25 pm
Thomas Vu
Sorry, I didn’t know linking to Twitter would cause it to embed itself like that!
28 March, 2020 at 12:01 pm
Michael Nielsen
Email sent.
25 March, 2020 at 12:31 pm
Anonymous
This is a good idea, but I worry that many of the participants will start with certain preconceived ideas about the conclusions they want to reach, for instance about the lethality of COVID-19 in young populations. Since this is not a purely mathematical project, it is easier to torture the data toward the finding one desires. Thus it is particularly important to start the project without preconceived ideas. This is difficult in the current media climate. Moreover, because of the actions already taken by many governments, there is a certain societal pressure to conclude that COVID-19 is an exceptional danger to humanity.
25 March, 2020 at 1:26 pm
Gönenç Onay
Good news. This is, of course, a must. Most of the current data is biased.
I would also say that some of us should work on applications of the cleaned data.
Here is one by Yoshua Bengio (Turing Award 2018), which seems promising to me:
https://yoshuabengio.org/2020/03/23/peer-to-peer-ai-tracing-of-covid-19/
Last, and maybe most important: we have to warn some of our colleagues.
I think we are not at a moment when one should aim to enhance his or her publication record. There are already about 25K coronavirus publications.
We should better organize our collaboration: maybe an international statement recalling the ethics of scientific work can be useful.
25 March, 2020 at 1:51 pm
Anonymous
Please be aware of this, though it’s about research papers rather than data per se: https://www.ncbi.nlm.nih.gov/research/coronavirus/
25 March, 2020 at 5:14 pm
Anonymous
https://coronavirus.1point3acres.com/en
26 March, 2020 at 2:57 pm
Anonymous
This 1point3acres site seems to be made by lots of engineers and scientists. They might be willing to share their raw data.
25 March, 2020 at 9:06 pm
JosephSugar
Given that German media such as ZDF sometimes share with the viewers astoundingly precise scientific data and projections on COVID, I am sure that Germany is way ahead in this project. Might as well just ask the Bundesamt fuer Statistik or the Robert Koch Institut to share their database.
26 March, 2020 at 3:58 am
adityaguharoy
https://www.mygov.in/covid-19/
The current updates on the figures for India can be found at the link above. Unfortunately, I couldn’t add it to the wiki page myself.
[Added, thanks. -T]
26 March, 2020 at 10:10 am
RSamyak
There’s a dataset at the bottom of this page: http://www.covid19india.org; it is also crowdsourced.
[Added, thanks – T.]
27 March, 2020 at 4:47 am
Lars Ericson
This tracks testing and is updated frequently: https://covidtracking.com/data/
[Added, thanks – T.]
27 March, 2020 at 8:46 am
juan
Hello.
We need much more detail (date when each person was diagnosed, date of infection for the same person, discharge date, date of death, gender, age, treatments, temperatures…) not just summaries.
I mean we need information on a per-person basis, to perform survival analyses, regressions with random effects…
I’ve found some data
https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset/data
https://www.kaggle.com/kimjihoo/coronavirusdataset
https://www.kaggle.com/imdevskp/covid-19-analysis-visualization-comparisons/data
https://www.sirm.org/category/senza-categoria/covid-19/
But they don’t provide enough information yet. It’s easy to find data aggregated by country, but those are just summaries.
27 March, 2020 at 9:56 am
Terence Tao
I’ve added your request to the wiki page, but as per my other comment, you may wish to also submit your request to https://www.united-against-covid.org/ (probably by posting at https://discourse.data-against-covid.org/c/i-have-data/15 ).
The UCLA hospitals do have records of this type for the COVID patients passing through their system, but for reasons of patient privacy they are unfortunately not available to researchers outside of UCLA. (Potentially this can be solved by anonymizing the data, but this cannot be crowdsourced, for obvious reasons.)
27 March, 2020 at 9:26 am
Terence Tao
We’ve been contacted by the organisers of the United against COVID-19 project, which is an existing clearinghouse for data, data scientists, and medical researchers working on COVID-19 projects. The data cleaning aspect of this polymath proposal appears to basically be a duplicate of this existing project, so it seems to make sense to redirect these efforts to that site in order to not fragment the pool of requests and volunteers. On the other hand, this polymath project has already been rather successful in assembling a list of public data sets relating to COVID-19, so perhaps the thing to do is to refocus the polymath proposal down to this narrower goal, and refer subsequent data cleaning requests to the “United against COVID-19” project. (But we can still use this comment thread to also make requests and answer them.)
27 March, 2020 at 12:43 pm
Terence Tao
Now that the polymath project is focusing on curating public COVID-19 data sets, I would like to start a discussion about what features would be desirable for a platform to host this collection. Right now we are using an ad hoc wiki page to list the collection, grouped into rough categories, but as the list grows it will become difficult to rapidly search for an appropriate data set for one’s needs, and we should eventually migrate to something more scalable than a wiki page. (And there are entire categories of data not yet present: for instance, regarding the impact of COVID-19 on other aspects of wellness, like mental health.)
One could envision a searchable “database of databases” where each database in the list has various metadata attached to it regarding the type of data, the sources used, the host organisation, relations to other databases (e.g., if one is a subset or cleaned version of the other) and so forth, with the ability to crowdsource various annotations and commentary on each such database. I don’t know if there are any existing platforms that are suitable for this sort of structure; hopefully others here may be able to suggest some.
27 March, 2020 at 2:44 pm
Thomas Vu
Terry,
We at AsOne are working hard to create the type of platform you are describing. We allow nesting of categories for better organization.
I have recreated the “possible layout” described in the COVID-19 Polymath Project Proposal PDF to better demonstrate our platform.
https://asone.ai/topic/C19Datasets
We are willing to implement any desired features from the resulting discussion into our platform!
29 March, 2020 at 8:41 am
Mark Wainwright
There’s CKAN (ckan.org), if it’s not too heavy duty.
28 March, 2020 at 8:04 am
Jan Funke
Martin (@headsortails on data-against-covid.org) suggested using Kaggle to publish curated datasets and offered help to do so. I think this is a great idea. In fact, there is a request for collecting and curating datasets on Kaggle itself already (in the form of a challenge: https://www.kaggle.com/data/139140).
Thomas, maybe we can aim for a hybrid solution: AsOne provides a category browser, and the leaf nodes would then point to the Kaggle datasets?
28 March, 2020 at 8:16 am
Jan Funke
By the way, here is the related thread on data-against-covid.org, just so you are aware: https://discourse.data-against-covid.org/t/request-for-help-build-a-platform-to-catalogue-covid-19-related-datasets/750/2 Feel free to move the discussion there if you think that makes sense.
28 March, 2020 at 9:48 am
Thomas Vu
I am definitely open to a hybrid solution! I agree that Kaggle has many advantages for hosting a data science focused effort.
AsOne has a broader focus: under our COVID-19 topic we currently house a citizen science project for rapid distribution of ventilators, and we would like to create a category to gather the evidence around possible treatments. We’re trying to become the platform where all COVID-19 research and relief efforts can be coordinated in a more organized fashion. I believe a true “COVID-19 Polymath Project” should encompass the whole problem at hand.
That said, I would love to coordinate with you and Martin on the dataset clearinghouse front over on Kaggle if that’s where we decide to go!
28 March, 2020 at 11:50 am
headsortails (Martin)
Hi both, I’m happy to help in adding the data to the Kaggle platform.
A major advantage of Kaggle is that they already host a lot of datasets with e.g. demographic or geospatial information on many different countries, which could be easily joined to more COVID-19 specific data for a more comprehensive analysis. The Kaggle platform is very mature and has robust search and tagging functionality. In addition, analysis notebooks in R/Python can be hosted together with the data to illustrate the extent of a dataset or provide (collaborative) analysis capabilities.
I had a look through the wiki that Jan linked, and I think that most of the datasets are already on Kaggle in some shape or form. There is a certain focus on US numbers, but Kaggle has a very international community and COVID-19 data from multiple other countries is also already present.
Maybe you guys can have a look at what’s already there: https://www.kaggle.com/datasets?search=covid-19
And then we can coordinate how to prioritise adding the missing datasets, and how to best maintain them / add future ones.
We can either continue the thread here or over at data-against-covid, which might be a bit better equipped to handle more detailed discussion:
https://discourse.data-against-covid.org/t/request-for-help-build-a-platform-to-catalogue-covid-19-related-datasets/750
Cheers, Martin
29 March, 2020 at 8:00 am
Terence Tao
Thanks for this! The Kaggle platform does have a lot of nice features, with each data set coming with a page that can contain a lot of auxiliary data, such as the notebooks you mention. But my view of this clearinghouse is at the next level up of organisation; it isn’t intended to host data directly, but instead to crowdsource links to data at other sources (including Kaggle), and also to create various “directory” pages and other search features to locate subcollections of data easily. I’m thinking of something resembling the Wikipedia model, where there are various “lists of X” pages that can be independently curated by different sets of people with varying degrees of automation. To give just one example, one could imagine a page devoted to social data related to COVID-19 (on the existing clearinghouse there is already some data on foot traffic and on twitter feeds). One could have a page devoted to COVID data in a specific country. And so forth; presumably there would be datasets that would show up in multiple such pages (right now the clearinghouse is organised in a tree structure where each data set only shows up once).
Another desirable feature would be for each data set to come with its own dedicated web page on the clearinghouse, that gives further metadata such as a description of the data set, any restrictions on use, the institutions or individuals involved in maintaining it, any gaps or other issues with the data, and any dependencies on other data sets (e.g., whether the data set is a cleaned version of another data set, or is part of a larger corpus that also has its own page on the site). Plus anything else that might be relevant (commentary, example code, APIs, etc.). A dedicated wiki (with some templates for data set pages containing the above sort of information) might already be good enough for this.
EDIT: On thinking about it a bit more, one problem with having too much metadata associated with each data set is that it makes it harder to casually add a data set to the clearinghouse. Perhaps one needs some sort of “submit” button where anyone without any special training can submit a new dataset with a form where a lot of entries can be left blank, and then later on the metadata can be cleaned up (and possibly merged with another dataset in case of duplicates) by someone else who is more familiar with the site. (The OEIS roughly follows this sort of model, for instance.)
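As a rough sketch of what such a dataset record might look like, with only the name and URL mandatory so that casual submissions can leave everything else blank for later curation (all field names below are illustrative, not a proposed standard):

```python
# Illustrative metadata record for one entry in a "database of databases".
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DatasetRecord:
    name: str
    url: str
    description: Optional[str] = None
    maintainer: Optional[str] = None            # institution or individual
    license_or_restrictions: Optional[str] = None
    known_issues: Optional[str] = None          # gaps, reporting changes, errors
    derived_from: list[str] = field(default_factory=list)  # e.g. cleaned version of another set
    tags: list[str] = field(default_factory=list)          # e.g. ["US", "county-level", "mobility"]

# A casual submission: most metadata left blank, to be filled in later.
submission = DatasetRecord(
    name="NYT US county-level cases and deaths",
    url="https://github.com/nytimes/covid-19-data",
    tags=["US", "county-level"],
)
print(submission)
```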
28 March, 2020 at 8:36 am
Lars Ericson
Maybe not a nice logistic function. Should we fit or ignore this curve? https://nypost.com/2020/03/28/shipments-of-urns-in-wuhan-raise-questions-about-chinas-coronavirus-reporting/
2 April, 2020 at 3:08 pm
Daniel Hayes
Over the last 2 weeks up to this time (Friday 3/4/2020 10 am, Melbourne, Australia time) the total worldwide number of coronavirus deaths has pretty much followed a logistic model, using the data from https://ourworldindata.org/coronavirus.
I constructed the model a week ago using data from 2 weeks ago to a week ago, and it pretty much predicted last week’s data.
The logistic model used is dp/dt = a p − b p^2, with p(80) = 8843, a = 0.1293655842, and b = 4.382456229 × 10^(−7).
Note that the data here is also changing with time in unexpected ways; e.g., p(80) was 8843 a week ago, but now it is p(80) = 8842. At this time we are at p(94) = 46891, and the model above gives about p(94) = 46905.
If the model holds for another week, then in one week’s time we will be at about p(101) = 94000.
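For readers who want to check these figures, here is a short computation using the closed-form solution of the logistic equation dp/dt = a p − b p^2 with the parameters quoted above:

```python
# Closed-form logistic solution with carrying capacity K = a/b and the
# initial condition p(80) = 8843, using the parameters quoted above.
import math

a = 0.1293655842
b = 4.382456229e-7
K = a / b                      # carrying capacity, roughly 295,000
t0, p0 = 80, 8843

def p(t):
    return K / (1.0 + (K - p0) / p0 * math.exp(-a * (t - t0)))

print(f"p(94)  = {p(94):,.0f}")   # about 46,900, close to the quoted 46,905
print(f"p(101) = {p(101):,.0f}")  # about 94,000, the one-week-ahead projection
```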
28 March, 2020 at 3:09 pm
Alex Selby
I’m concerned that many people and institutions are using the JHU dataset https://github.com/CSSEGISandData/COVID-19, presumably because it is in a convenient form, but it has some serious errors in it that are not being corrected. For example half of the March 12th entries (109 countries) are wrong as they erroneously copy across data from March 11th. (This is not the only error.) These errors are carried over into derived datasets such as the one listed on the clearinghouse wiki: https://github.com/datasets/covid-19.
These errors have been flagged on the JHU issue tracker, but apparently remain unnoticed by JHU. (I’ve tried contacting them by email, but just get an autoresponder.)
I suggest that the polymath wiki puts a warning flag against these datasets so that people may decide for themselves if such errors are acceptable or fatal for their analysis. Also, I’m hoping such a note may spur the creation of a corrected fork of the JHU dataset (or even nudge JHU into correcting it).
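One crude way to scan for this kind of duplication is to flag dates whose column in the JHU time series is identical to the previous day’s for nearly every row. A sketch in Python/pandas follows; the file path reflects the repository layout at the time of writing, and the 95% threshold is arbitrary (note that genuine zero-growth days will also trigger it):

```python
# Flag dates in the JHU global confirmed-cases series that look like
# copies of the previous day's column.
import pandas as pd

URL = ("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/"
       "csse_covid_19_data/csse_covid_19_time_series/"
       "time_series_covid19_confirmed_global.csv")

df = pd.read_csv(URL)
dates = df.columns[4:]  # first four columns: Province/State, Country/Region, Lat, Long

for prev, cur in zip(dates, dates[1:]):
    frac_same = (df[cur] == df[prev]).mean()
    if frac_same > 0.95:
        print(f"{cur}: {frac_same:.0%} of rows identical to {prev}")
```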
29 March, 2020 at 5:00 am
Lars Ericson
NSF funding up to $200K immediately available for COVID-19 modelling. https://www.nsf.gov/pubs/2020/nsf20052/nsf20052.jsp
29 March, 2020 at 8:30 am
Mark Wainwright
There is a similar effort as part of the Coronavirus Tech Handbook:
https://coronavirustechhandbook.com/data
You might wish to note this, link to it, add data sources directly to it, etc. (it is easy to edit and welcomes all help).
29 March, 2020 at 10:46 am
Richard Séguin
The New York Times has been maintaining a .csv file recording the cumulative number of confirmed cases and deaths in the U.S. by state, county, and date:
https://github.com/nytimes/covid-19-data
If you click on the Raw tab in https://github.com/nytimes/covid-19-data/blob/master/us-counties.csv you should get to another page that you can save as a .csv file (i.e., in your browser, select File/Save Page).
This data is apparently the source for their very detailed map.
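(For anyone working in Python, the raw file can also be loaded directly into a dataframe without saving it manually; the column list in the comment below reflects us-counties.csv at the time of writing.)

```python
# Load the NYT county-level data straight from the raw-file URL.
import pandas as pd

url = "https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv"
counties = pd.read_csv(url, parse_dates=["date"])
print(counties.tail())  # columns: date, county, state, fips, cases, deaths
```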
[Added, thanks – T.]
29 March, 2020 at 12:06 pm
Lars Ericson
Vaccine and treatment tracker: https://milkeninstitute.org/covid-19-tracker
[Added, thanks – T.]
29 March, 2020 at 4:40 pm
Terence Tao
I’ve asked Michael Nielsen (who maintains the PolymathWiki) to temporarily open up account registration on the wiki so that it is easier for people to contribute to the clearinghouse. We may lock it down again if we encounter significant spam problems (which is why account creation was locked down in the first place).
EDIT: OK, the spam account creation started more or less immediately; we will soon lock it back down again.
30 March, 2020 at 1:59 pm
Larry Adams
Not sure if this is useful. https://midasnetwork.us/covid-19/ has some data aimed at COVID-19 modelling research complete with documented metadata.
[Added, thanks – T.]
9 April, 2020 at 10:15 pm
Some Select COVID-19 Modeling Resources | R-bloggers
[…] Tao’s March 25th post which announces the Christopher Strohmeier COVID-19 Polymath Proposal initiated a valuable […]
4 May, 2020 at 10:06 pm
Alberto Ibañez
Hello everyone from Spain. We have been hit hard by this pandemic, with many human and economic losses; our health system has been overwhelmed in the large cities, and nursing homes have been severely damaged.
This is not the time or the place to discuss whether our politicians or our scientists have failed. After stopping our economy almost entirely and confining the whole country at home, we have started the de-escalation phase.
Most of us want to leave home, but the fear of possible outbreaks grips us, with our health personnel very tired and depleted.
I want to trust our experts, but I cannot; I do not feel safe. I do not usually trust blindly; I need to see the plan clearly. I need some mathematical assurance that we can control the epidemic, and to know how it will be done, but they do not give such precise information.
At this point I want to believe that there must be an algorithm that lets us know whether we can control the pandemic, with the focus on not exceeding the capacity of the health system and on monitoring the most vulnerable population: a kind of countdown of days until collapse.
Such an algorithm seems like a good candidate for an AI implementation.
Maybe it is already being done, but since I do not know, I do not feel safe.
This algorithm should be open source, so that all countries could use and improve it and converge on the best version possible. It would give us a lot of peace of mind.
Taking into account each health area, its available beds and ventilators, available protective material and masks, available health personnel, the population, and the capacity to test accurately and monitor the epidemic situation while assuming the minimum possible risk, or any other important variable, together with the known epidemiological data, and without being an expert in the field, I do not think it would be so difficult to create this collapse alert system, with enough lead time to take measures to avoid collapse, such as isolation of the infected population, … Of course, always making use of masks, social distancing, and hand hygiene.
Surely this optimal algorithm already exists, is open source and available to all countries, and above all is reliable. That would give us much peace of mind, much more than the words of politicians.
But if it does not exist, it should, for an optimal and safe de-escalation. And if it does not exist, it is you, the mathematicians, computer scientists, and scientists, who can make it possible. Thank you all.
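As a toy illustration of the kind of “day counter” being asked for here, assuming nothing more than a constant daily growth factor in hospital occupancy (all numbers below are made up, and a real system would estimate the growth rate from recent admissions data):

```python
# Days until projected hospital occupancy exceeds capacity, assuming
# constant exponential growth. Purely illustrative.
import math

def days_to_capacity(occupied_beds, total_beds, daily_growth):
    """Days d until occupied_beds * daily_growth**d exceeds total_beds."""
    if occupied_beds >= total_beds:
        return 0.0
    if daily_growth <= 1.0:
        return math.inf  # occupancy flat or shrinking: no projected collapse
    return math.log(total_beds / occupied_beds) / math.log(daily_growth)

print(days_to_capacity(occupied_beds=400, total_beds=1000, daily_growth=1.08))
# about 11.9 days at 8% daily growth
```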
7 May, 2020 at 3:50 am
Robert Clark
Big-data to search for COVID-19 cures.
Hello, Prof. Tao. I wanted to discuss a topic that I thought you, as a highly regarded mathematician, could help with. It is in regard to the COVID-19 crisis. I would have thought it obvious that one approach to searching for cures is to examine the collected medical histories of COVID-19 patients, looking either for medications conspicuously absent from those histories, suggesting they may be protective, or for medications appearing in common among patients with positive outcomes, suggesting they may be curative.
Yet, with now over 1 million cases and tens of thousands of deaths in the U.S., this still has not been done. Back in March there were only a few tens of thousands of cases and a few hundred deaths. If this had been done then, an answer could likely have been found within a matter of days. And in an outbreak in its exponential growth phase, every day is vital.
I have been arguing for this to be done since March, but I am only a lone voice in the wilderness, and not a very loud one. I thought you, with a much louder voice, could make it clear that this is an obvious, low-cost, and quick means of searching for cures.
Some discussions on the topic:
Big Data to fight COVID-19 and Other Diseases.
This searches for cures in the reverse sense, by looking for medications absent from the collected medical histories of patients.
Big Data to fight COVID-19 and Other Diseases, Page 2.
This searches for cures in the direct sense, by looking for medications that patients with positive outcomes have in common.
Thank You,
Robert Clark
___________________________
Robert Clark
Dept. of Mathematics
Widener University
One University Place
Chester, PA 19013 USA
___________________________
7 May, 2020 at 9:38 am
David Fry
David Fry @Theorist Papers
Terry, I said it before, and please allow me to say it again:
I have noticed over time that you are beginning to use your God-given talents for the good of others, more and more. To me, it is simply your conscious or subconscious realization of God’s purpose for you: to use your talent to help others. If you keep widening this realization, your platform for reaching and influencing others will become worldwide. Then you can really change the world for the better.
It’s happening right now, and here it is,
David
14 May, 2020 at 9:08 pm
Lars Ericson
Good observations here: https://hjstein.blogspot.com/2020/05/covid-19-data-collection-garbage-in_33.html?m=1
21 May, 2020 at 9:10 am
Lars Ericson
Note this COVID-19 DREAM Challenge is just starting: https://www.synapse.org/#!Synapse:syn21849255/wiki/601865
3 June, 2020 at 10:53 am
Lars Ericson
FYI, Government money: https://beta.sam.gov/opp/d73216413c334184a7bc1df690946ee7/view?keywords=IARPA&sort=-relevance&index=&is_active=true&page=1
1 October, 2020 at 5:26 am
Zcp
Who can be believed? https://www.un.org/en/coronavirus
1 October, 2020 at 3:12 pm
I am not chiNA 007
Americans, come on! For human beings, it is a terrible disaster!
3 March, 2021 at 9:35 am
Anonymous
Collatz
Could a Collatz-like process explain why there is matter and not antimatter in our universe? I was never satisfied with the mutual-destruction explanation. But a Collatz-like process that skewed everything toward “matter” makes more sense to me.