The Official Homepage on the Origins of Extreme Learning Machines (ELM)
Possible source of inspiration: Is ELM a follow-up to the 2004 Nature article warning that intelligent plagiarism is more harmful than literal (verbatim)
plagiarism? In particular, "How should we tackle the increasing problem of researchers rewriting others' results?"
We are pleased to report that, as of December 2015, no technical error had been reported to us since this page went online in April 2015.
Read these ludicrous comparison papers by G.-B. Huang with our commentaries Cog Comp 2015 and Cog Comp 2014.
Around mid-July 2015, G.-B. Huang posted an email on his [email protected] mailing list.
This email was forwarded to [email protected] for our responses.
As usual, this email is meaningless and our responses can be downloaded from here.
Introduction: The objective of launching this homepage is to present the evidence regarding the tainted origins of the extreme learning machines (ELM). As we would
like all readers to verify the facts within a short period of time (perhaps 10 to 20 minutes), we have uploaded a dozen PDF files with highlights and annotations
clearly showing the following:
1. The kernel (or constrained-optimization-based) version of ELM (ELM-Kernel, PDF: Huang-LS-SVM-2012) is identical to kernel ridge regression (for regression and
single-output classification, PDF: Saunders ICML 1998, as well as the LS-SVM with zero bias; for multiclass multi-output classification, PDF: An CVPR 2007).
2. ELM-SLFN (the single-layer feedforward network version of the ELM, PDF: Huang IJCNN 2004) is identical to the randomized neural network (RNN, with omission of bias,
PDF: Schmidt 1992) and another simultaneous work, the random vector functional link (RVFL, with omission of direct input-output links, PDF: Pao 1994). According
to recent results, it is apparent that the older and original RVFL is far superior to the ELM for time series forecasting and classification.
3. ELM-RBF (PDF: Huang ICARCV 2004) is identical to the randomized RBF neural network (PDF: Broomhead 1988, with a performance-degrading randomization of RBF radii or
impact factors).
4. In all three cases above, Huang got his papers published after excluding a large volume of very closely related literature.
5. Hence, all 3 "ELM variants" have absolutely no technical originality, promote unethical research practices among researchers, and steal citations from original
inventors.
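To make claims 1-3 concrete, the following minimal sketch (our own illustration in Python/NumPy, not code from any of the cited papers; the data, hyperparameters, and all names are arbitrary choices for demonstration) shows the three constructions at issue: a network with untrained random hidden weights whose linear output layer is fit by regularized least squares (Schmidt 1992 / RVFL / "ELM-SLFN"), closed-form kernel ridge regression (Saunders et al. 1998 / "ELM-Kernel"), and an RBF network with randomly selected centers (Broomhead & Lowe 1988 / "ELM-RBF").

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (purely illustrative).
X = rng.standard_normal((200, 5))
y = np.sin(X @ rng.standard_normal(5)) + 0.1 * rng.standard_normal(200)
lam = 1e-3  # ridge regularization strength

# --- Random-hidden-layer network (Schmidt 1992 / RVFL 1994 / "ELM-SLFN") ---
# Input-to-hidden weights are drawn at random and never trained; only the
# linear output layer is fit, in closed form, by regularized least squares.
n_hidden = 100
W = rng.standard_normal((5, n_hidden))   # random input-to-hidden weights
b = rng.standard_normal(n_hidden)        # random hidden biases
H = np.tanh(X @ W + b)                   # random hidden-layer features
# The RVFL additionally concatenates direct input-output links; dropping
# these columns (and/or the bias) yields the ELM-SLFN form described above.
H_rvfl = np.hstack([H, X])
beta = np.linalg.solve(H_rvfl.T @ H_rvfl + lam * np.eye(H_rvfl.shape[1]),
                       H_rvfl.T @ y)     # closed-form output weights
y_hat = H_rvfl @ beta

# --- Kernel ridge regression (Saunders et al., ICML 1998) ---
# The claim above is that "ELM-Kernel" reduces to exactly this closed form
# (equivalently, an LS-SVM with zero bias).
def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # dual coefficients
y_krr = K @ alpha

# --- Randomized RBF network (Broomhead & Lowe 1988) ---
# Centers are picked at random from the data and only the linear output
# weights are fit; ELM-RBF additionally randomizes the RBF widths
# ("impact factors"), which the page argues degrades performance.
centers = X[rng.choice(len(X), 20, replace=False)]
G = rbf_kernel(X, centers)                    # RBF design matrix
w = np.linalg.lstsq(G, y, rcond=None)[0]      # linear output weights
y_rbf = G @ w
```

In all three cases the trainable part is a single linear least-squares problem; the differences the page disputes amount to which fixed random features feed it.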
Huang knowingly excluded closely related works in his ELM-RBF paper (PDF: Huang ICARCV 2004) so that it could be published. If all relevant RBF works had been
correctly presented (as he had done in his non-ELM RBF paper “Huang TNN 2005” during the same period), ELM-RBF would have been rejected. ELM-SLFN was also published
after excluding a large volume of very closely related literature (PDF: Wang Huang TNN 2008). An experimental comparison would have revealed the superior performance
of the RVFL, thereby leading to the rejection of ELM-SLFN in 2004. ELM-related activities have taken the state of the art in RNN backwards to pre-RVFL times, i.e.,
pre-1994. After Huang created the unethical name “ELM” during 2004-2006 by excluding prior references (i.e., without any qualitative or quantitative comparisons with
prior methods), he started citing prior works, but incorrectly, while continuing to steal citations and credit from the original authors by pointing to his minute
variations, such as setting the bias to zero or removing some connections.
We must also note that since 2000, internet search engines and research-paper databases have been widely available, allowing researchers to locate closely related
references with ease. Most of the closely related literature excluded in ELM-Kernel, ELM-RBF and ELM-SLFN was published in top-tier journals/conferences and well
captured by databases and internet search engines in 2004.
A detailed description of the above issues has been submitted to IEEE for investigation, as seen at this link:
http://theanonymousemail.com/view/?msg=ZHEZJ1AJ
A detailed description of the above issues has been submitted to Springer for investigation, as seen at this link:
http://theanonymousemail.com/view?msg=1NOEGFQH
Importance: As thousands of junior researchers read ELM-related publications, they will independently discover the tainted origins of the ELM. They will
conclude that it is perfectly acceptable to knowingly exclude closely related papers from a reference list in order to get a paper with a tiny variation (which
may make the new method worse than the original, as in the case of the ELM) published. It is the responsibility of the machine learning research community to expose
the flawed origin of the ELM and to inform junior researchers of the consequences of such unethical practices.
No Expiration Date for Unethical Practices: In 1998, IEEE republished a case of plagiarism committed in 1901 to demonstrate that the severity of unethical practices
does not diminish with time. Recently, two German ministers resigned because they had committed plagiarism while doing their PhDs decades earlier. In particular, the
question asked in relation to these two ministers was “Are these two ministers ideal role models for youngsters in this country?”. It is now time for us to pose
the same question in relation to the inventor and promoters of the ELM.
Message for the IEEE: As a responsible and respected professional organization, the IEEE must investigate and publicize its conclusions as either (1) the activities
surrounding the ELM are unethical and must be stopped or (2) the IEEE encourages all researchers to follow the steps given below ("easy but proven steps to fame") and
submit such works to IEEE publications (given that the IEEE strictly follows an equal opportunity policy).
Easy but Proven 5 Steps to Academic Fame
1. The Brink of Genius: Take a paper published about 20 years ago (so that the original authors have either passed away, retired, or are too well-established/generous
to publicly object. Unfortunately, pioneers like Broomhead and Pao have passed away). Introduce a very minor variation, for example, by fixing one of the tunable
parameters at zero (who cares if this makes the old method worse, as long as you can claim it is now different and faster). Rewrite the paper in such a way that
plagiarism software cannot detect the similarity, so that you are not in any of the “IEEE 5 levels of plagiarism”. Give a completely new sensational name (hint: the
word “extreme” sounds extremely sexy).
2. Publication: Submit your paper(s) to a poor quality conference or journal without citing any related previous works.
3. Salesmanship: After publishing such a paper, now it is time to sell the stolen goods! Never blush. Don't worry about ethics. Get your friends/colleagues to use your
“big thing”. Put up your Matlab program for download. Organize journal special issues, conferences, etc. to promote these unethical research practices among junior
researchers who simply trust your unethical publications without bothering to read the original works published in the 1980s or 1990s. Of course, the prerequisite
for a paper to be accepted in your special issues/conferences is tens of citations to your unethically created name and publications. Invite big names to be associated
with your unethically created name as advisory board members, keynote speakers, or co-authors. These people may be too busy to check the details (with a default
assumption that your research is ethical) and/or too nice to say no. But, once “infected” with your unethically created name, they will be obliged to defend it for
you.
4. The Smoke Screen: Should others point out the original work, you claim not to know the literature while pointing to a minor variation that you introduced in the
first place. Instead of accepting that your work was almost the same as the literature and reverting back to the older works, you promote your work by: (1) repeating
the tiny variation; (2) excluding the almost identical works in the list of references or citing and describing them incorrectly; (3) excluding thorough experimental
comparisons with nearly identical works in the literature so that worse performance of your minute variations will not be exposed; (4) making negative statements about
competing methods and positive statements about your unethically created name without solid experimental results, using words like “may” or “analysis”; (5) comparing
with apparently different methods. You can also copy the theories and proofs derived for other methods, apply them to your method (a tiny variation of those in the old
literature), and claim that your method has plenty of theory while others have none.
5. Fame: Declare yourself as a research leader so that junior researchers can follow your footsteps. Enjoy your new fortune, i.e., high citations, invited speeches,
etc. You don’t need to be on the shoulders of giants, because you are a giant! All you have to do to get there is to follow these easy steps!
We can call the above steps “ICP” (Intelligent Conceptual Plagiarism with minor variation), as opposed to stupid (verbatim) plagiarism. The machine learning community
should feel embarrassed if “IP” (Intelligent Plagiarism) was originally developed and/or grandiosely promoted by this community, when the community is supposed to
create other (more ethical) intelligent algorithms to benefit mankind.
Message for EiCs, Editors and Reviewers: When you receive an ELM-related submission, please consider directing the authors to this web page and requesting them to
cite all related literature, explain it CORRECTLY and FAIRLY, and experimentally compare with methods such as the RVFL (with appropriate tuning) so that the truth can be
exposed soon. If only ELM promoters are invited as reviewers, they will do their best to suppress fair comparisons and descriptions of the superior methods published
in the 1980s and 1990s.
Message for Researchers: If you are investigating randomized neural networks (RNN) and/or kernel ridge regression (KRR), you should uphold ethics. This implies that
you should cite all related original literature, explain it CORRECTLY and FAIRLY, and experimentally compare with methods such as RVFL, KRR, etc. (with appropriate
tuning) so that the truth can be exposed soon.
Message for Professors: If you are teaching research ethics in a university or college, please consider ELM-related activities as a case study in your course.
Message for Attendees of ELM Conferences: During this conference, you must demand that the inventor and promoters of the ELM answer the questions posed on this page and
in all these PDF files.
A General Request: Please consider displaying a link to this page in your web pages, or better yet, hosting this webpage with all its PDF files in your own website, so
that researchers interested in the ELM will be able to locate these materials with ease. You may also include these links in your ELM paper reviews and initiate
discussions on this topic in social media, such as Facebook, LinkedIn, WeChat, QQ, the connectionist mailing list, etc., by referring to these weblinks:
http://theanonymousemail.com/view/?msg=ZHEZJ1AJ
http://elmorigin.wix.com/originofelm
http://theanonymousemail.com/view?msg=1NOEGFQH
ELM: The Sociological Phenomenon
Since the invention of the ELM name in 2004, the number of papers and citations on the ELM has been increasing exponentially. This phenomenon would not have been
possible without the support and participation of researchers on the fringes of machine learning. Some (unknowingly and a few knowingly) love the ELM for various
reasons:
• Some authors love the ELM, because it is always easy to publish ELM papers in an ELM conference or an ELM special issue. For example, one can simply take
a decades-old paper on a variant of the RVFL or RBF and re-publish it as a variant of the ELM, after paying the small price of adding tens of citations to Huang’s “classic
ELM papers”.
• A couple of editors-in-chief (EiCs) love the ELM and offer multiple special issues/invited papers, because the ELM conference & special issues will
bring a flood of papers, high citations and high impact factors to their low quality journals. The EiCs can claim to have faithfully worked within the peer-review
system, i.e. the ELM submissions are all rigorously reviewed by ELM experts.
• A few technical leaders, e.g. some IEEE society officers, love the ELM, because it rejuvenates the community by bringing in more activities and
subscriptions.
• A couple of funding agencies love the ELM, because they would rather fund a sexy new name than anything else.
• Once associated with the ELM name, without knowing the gravity of the ethical violations, EiCs (of journals that published numerous ELM articles), leaders
(who are associated with ELM conference series: ELM 2015, ELM 2014, ELM 2013, ELM 2012), ELM special issue editors and senior ELM authors would not be able to change
their views because they could never declare that "I have been associated with ELM without knowing all these violations and I was wrong". In order to insist that they
have not done anything wrong, they continue to support ELM. Hence, experts uninfected by ELM should investigate the ELM scandal.
One may ask: how can something loved by so many be wrong?
Giordano Bruno proposed that the stars were just distant suns surrounded by their own exoplanets, which went against the majority opinion among scientists of his
time. On 17 February 1600, in the Campo de' Fiori (a central Roman market square), with his "tongue imprisoned because of his wicked words", he was burned to death.
The cardinals who judged Giordano Bruno were: Cardinal Bellarmino (Bellarmine), Cardinal Madruzzo (Madruzzi), Cardinal Camillo Borghese (later Pope Paul V), Domenico
Cardinal Pinelli, Pompeio Cardinal Arrigoni, Cardinal Sfondrati, Pedro Cardinal De Deza Manuel, Cardinal Santorio (Archbishop of Santa Severina, Cardinal-Bishop of
Palestrina).
Galileo has been called the "father of modern observational astronomy", the "father of modern physics", and the "father of modern science". Galileo's championing of
heliocentrism and Copernicanism was controversial within his lifetime, when most subscribed to geocentrism. Galileo was brought forward in 1633, and, in front of his
"betters," he was, under the threat of torture and death, forced to his knees to renounce all belief in Copernican theories, and was thereafter sentenced to
imprisonment for the remainder of his days.
A leading cause of the current Greek economic crisis was that a previous government showered its constituents with jobs and lucrative compensation in order to gain
their votes, thereby raising the debt to an unsustainable level. At the time, this behavior was welcomed by many, but it led to severe consequences. Another
example of popularity leading to a massive disaster can be found in WWII, as Hitler was elected by popular vote.
The price to pay in the case of the ELM is diminished publishing ethics, which, in the long run, will fill the research literature with renamed junk, thereby
rendering the research community and respected names, such as IEEE, Springer and Elsevier, laughing stocks. Similar to the previous Greek government and its supporting
constituents, the ELM inventor and his supporters are “borrowing” from the future of the entire research community for their present enjoyment! It is time to wake up
to your conscience.
Our beloved peer-review system was grossly abused and failed spectacularly in the case of ELM. It is time for the machine learning experts and leaders to investigate
the allegations presented in this page and to take corrective actions soon.
Why Anonymity? The same as anonymous reviews: to avoid possible personal attacks.
Enjoy a Musical Tribute to Plagiarizers
Top ELM Promoters: We will soon provide a list of top ELM promoters. If you submit papers on the RVFL or other RNN methods superior to the ELM, you may wish to request
the journal editor-in-chief to exclude the ELM inventor and top ELM promoters as reviewers.
References (These PDF files have annotations and highlights)
S. An, W. Liu, and S. Venkatesh, "Face recognition using kernel ridge regression," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR),
pp. 1-7, 2007.
D. S. Broomhead and D. Lowe, "Multivariable functional interpolation and adaptive networks," Complex Systems, vol. 2, 321-355, 1988.
S. Chen, C. F. Cowan, P. M. Grant, "Orthogonal least squares learning algorithm for radial basis function networks," IEEE Trans. Neural Networks, 2(2):302-309, 1991.
M. Fernandez-Delgado, E. Cernadas, S. Barro, and D. Amorim, "Do we need hundreds of classifiers to solve real world classification problems?" Journal of Machine
Learning Research, vol. 15, No. 1, 3133-3181, 2014.
G.-B. Huang, C.-K. Siew, "Extreme learning machine: RBF network case," Proc. ICARCV 2004, pp. 1029-1036 (Int. Conf on Control, Automation, Robotics and Vision).
G. B. Huang, C. K. Siew, "Extreme learning machine with randomly assigned RBF Kernels," Int. J of Information Technology, 11(1):16-24, 2005.
G.-B. Huang, P. Saratchandran, N. Sundararajan, "A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation," IEEE Trans on Neural
Networks, 16(1):57-67, 2005.
G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: A new learning scheme of feedforward neural networks," Proc. of IEEE Int. Joint Conf. on Neural
Networks, Vol. 2, 2004, pp. 985-990.
G.-B. Huang, "Reply to comments on 'the extreme learning machine'," IEEE Trans. Neural Networks, vol. 19, no. 8, pp. 1495-1496, Aug. 2008.
G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Trans. on Systems, Man, and Cybernetics,
Part B: Cybernetics, Vol. 42, no. 2, 513-529, 2012.
G.-B. Huang, "An insight into extreme learning machines: Random neurons, random features and kernels," Cognitive Computation, Vol. 6, 376-390, 2014.
G.-B. Huang, "What are Extreme Learning Machines? Filling the Gap between Frank Rosenblatt's Dream and John von Neumann's Puzzle," Cognitive Computation, vol. 7, 2015
(Invited Paper, DOI: 10.1007/S12559-015-9333-0).
Y.H. Pao, G.H. Park, and D. J. Sobajic, "Learning and generalization characteristics of the random vector functional-link net," Neurocomputing, 6(2):163-180, 1994.
J. Park and I. W. Sandberg, "Universal approximation using radial-basis function networks," Neural Comput., vol. 3, no. 2, pp. 246-257, June 1991.
C. Saunders, A. Gammerman and V. Vovk, "Ridge Regression Learning Algorithm in Dual Variables", in Proc ICML 1998.
W. F. Schmidt, M. A. Kraaijveld, and R. P. W. Duin, “Feedforward neural networks with random weights,” Proc. of 11th IAPR Int. Conf. on Pattern Recog., Conf. B:
Pattern Recognition Methodology and Systems, Vol. 2, 1992, pp. 1–4.
L. P. Wang and C. R. Wan, "Comments on 'The extreme learning machine'," IEEE Trans. Neural Networks, Vol. 19, No. 8, 1494-1495, 2008.
H. White, "An additional hidden unit test for neglected nonlinearity in multilayer feedforward networks," Proc. of Int. conf. on Neural Networks, pp. 451-455, 1989.
Email for feedback: [email protected]
plagiarism. In particular, "How should we tackle the increasing problem of researchers rewriting others' results?"
We are pleased to inform that until Dec 2015 no technical error was reported to us since going online in April 2015.
Read these ludicrous comparison papers by G.-B. Huang with our commentaries Cog Comp 2015 and Cog Comp 2014.
Around mid July 2015, G.-B. Huang posted an email on his [email protected] emailing list.
This email was forwarded to [email protected] for our responses.
As usual, this email is meaningless and our responses can be downloaded from here.
Introduction: The objective of launching this homepage is to present the evidences regarding the tainted origins of the extreme learning machines (ELM). As we would
like all readers to verify the facts within a short period of time (perhaps 10 to 20 minutes), we have uploaded a dozen of PDF files with highlights and annotations
clearly showing the following:
1. The kernel (or constrained-optimization-based) version of ELM (ELM-Kernel, PDF: Huang-LS-SVM-2012) is identical to kernel ridge regression (for regression and
single-output classification, PDF: Saunders ICML 1998, as well as the LS-SVM with zero bias; for multiclass multi-output classification, PDF: An CVPR 2007).
2. ELM-SLFN (the single-layer feedforward network version of the ELM, PDF: Huang IJCNN 2004) is identical to the randomized neural network (RNN, with omission of bias,
PDF: Schmidt 1992) and another simultaneous work, i.e., the random vector functional link (RVFL, with omission of direct input-output links, PDF: Pao 1994). According
to the recent results, it is apparent that the older and original RVFL is far superior than the ELM for time series forecasting and classification..
3. ELM-RBF (PDF: Huang ICARCV 2004) is identical to the randomized RBF neural network (PDF: Broomhead 1988, with a performance-degrading randomization of RBF radii or
impact factors).
4. In all three cases above, Huang got his papers published after excluding a large volume of very closely related literature.
5. Hence, all 3 "ELM variants" have absolutely no technical originality, promote unethical research practices among researchers, and steal citations from original
inventors.
Huang knowingly excluded closely related works in his ELM-RBF paper (PDF: Huang ICARCV 2004) so that it could be published. If all relevant RBF works had been
correctly presented (as he had done in his non-ELM RBF paper “Huang TNN 2005” during the same period), ELM-RBF would have been rejected. ELM-SLFN was also published
after excluding a large volume of very closely related literatures (PDF: Wang Huang TNN 2008). An experimental comparison would have revealed the superior performance
of the RVFL, thereby leading to the rejection of ELM-SLFN in 2004. ELM-related activities have taken the state-of-the-art in RNN backwards to pre-RVFL time, i.e. pre-
1994. After Huang created the unethical name “ELM” during 2004-2006 by excluding prior references (i.e., without any qualitative nor quantitative comparisons with
prior methods), he started citing prior works, but incorrectly, while continuing to steal citations and credits from the original authors by pointing to his minute
variations, such as setting bias to zero or removing some connections.
We must also take note that since 2000, internet search and databases of research papers are widely available for researchers to locate closely related references with
ease. Most of the closely related literatures excluded in ELM-Kernel, ELM-RBF and ELM-SLFN were published in top tier journals/conferences and well captured by
databases and internet search engines in 2004.
A detailed description on the above issues has been submitted to IEEE for investigations as seen at this link:
http://theanonymousemail.com/view/?msg=ZHEZJ1AJ
A detailed description on the above issues has been submitted to Springer for investigations as seen at this link:
http://theanonymousemail.com/view?msg=1NOEGFQH
Importance: As thousands of junior researchers are reading ELM-related publications, they all will discover independently the tainted origins of the ELM. They all will
conclude that it is perfectly acceptable to knowingly exclude closely related papers from the reference lists in order to get their papers with tiny variation (which
may make their new method worse than the original works as in the case of ELM) published. It is the responsibility of the machine learning research community to expose
the flawed origin of the ELM and to inform the junior researchers of the consequences of such unethical practices.
No Expiration Date for Unethical Practices: IEEE republished the plagiarism committed in 1901 in 1998 to demonstrate that the severity of unethical practices does not
diminish with time. Recently two German ministers resigned as they had committed plagiarism while they were doing their PhDs decades earlier. In particular, the
question asked in relation to these two ministers was “Are these two ministers ideal role models for youngsters in this country?”. It is now the time for us to pose
the same question in relation to the inventor and promoters of the ELM.
Message for the IEEE: As a responsible and respected professional organization, the IEEE must investigate and publicize its conclusions as either (1) the activities
surrounding the ELM are unethical and must be stopped or (2) the IEEE encourages all researchers to follow the steps given below ("easy but proven steps to fame") and
submit such works to IEEE publications (given that the IEEE strictly follows an equal opportunity policy).
Easy but Proven 5 Steps to Academic Fame
1. The Brink of Genius: Take a paper published about 20 years ago (so that the original authors have either passed away, retired, or are too well-established/generous
to publicly object. Unfortunately, pioneers like Broomhead and Pao have passed away). Introduce a very minor variation, for example, by fixing one of the tunable
parameters at zero (who cares if this makes the old method worse, as long as you can claim it is now different and faster). Rewrite the paper in such a way that
plagiarism software cannot detect the similarity, so that you are not in any of the “IEEE 5 levels of plagiarism”. Give a completely new sensational name (hint: the
word “extreme” sounds extremely sexy).
2. Publication: Submit your paper(s) to a poor quality conference or journal without citing any related previous works.
3. Salesmanship: After publishing such a paper, now it is time to sell the stolen goods! Never blush. Don't worry about ethics. Get your friends/colleagues to use your
“big thing”. Put up your Matlab program for download. Organize journal special issues, conferences, etc. to promote these unethical research practices among junior
researchers who would just trust your unethical publications without bothering to read the original works published in the 1980s or 1990s. Of course, the pre-requisite
for a paper to be accepted in your special issues/conferences is 10s of citations for your unethically created name and publications. Invite big names to be associated
with your unethically created name as advisory board members, keynote speakers, or co-authors. These people may be too busy to check the details (with a default
assumption that your research is ethical) and/or too nice to say no. But, once “infected” with your unethically created name, they will be obliged to defend it for
you.
4. The Smoke Screen: Should others point out the original work, you claim not to know the literature while pointing to a minor variation that you introduced in the
first place. Instead of accepting that your work was almost the same as the literature and reverting back to the older works, you promote your work by: (1) repeating
the tiny variation; (2) excluding the almost identical works in the list of references or citing and describing them incorrectly; (3) excluding thorough experimental
comparisons with nearly identical works in the literature so that worse performance of your minute variations will not be exposed; (4) making negative statements about
competing methods and positive statements about your unethically created name without solid experimental results using words like “may” or “analysis”; (5) comparing
with apparently different methods. You can copy the theories and proofs derived for other methods and apply to your method (with tiny variation from those in the old
literature) claim that your method has got a lot of theories while others do not have.
5. Fame: Declare yourself as a research leader so that junior researchers can follow your footsteps. Enjoy your new fortune, i.e., high citations, invited speeches,
etc. You don’t need to be on the shoulders of giants, because you are a giant! All you have to do to get there is to follow these easy steps!
We can call the above steps “ICP” (Intelligent Conceptual Plagiarism) with minor variation, as opposed to Stupid (verbatim) Plagiarism. The machine learning community
should feel embarrassed if “IP” (Intelligent Plagiarism) was originally developed and/or grandiosely promoted by this community, while the community is supposed to
create other (more ethical) intelligent algorithms to benefit the mankind.
Message for EiCs, Editors and Reviewers: When you receive an ELM-related submission, please consider directing the authors to this web page and requesting them to
cite all related literature, explain them CORRECTLY and FAIRLY and experimentally compare with methods such as RVFL (with appropriate tuning) so that the truth can be
exposed soon. If only ELM promoters are invited as reviewers, they will do their best to suppress fair comparisons and descriptions of the superior methods published
in the 1980s and 1990s.
Message for Researchers: If you are investigating randomized neural networks (RNN) and/or kernel ridge regression (KRR), you should uphold ethics. This implies that
you should cite all related original literatures, explain them CORRECTLY and FAIRLY and experimentally compare with methods such as RVFL, KRR, etc. (with appropriate
tuning) so that the truth can be exposed soon.
Message for Professors: If you are teaching research ethics in a university or college, please consider ELM-related activities as a case study in your course.
Message for Attendees of ELM Conferences: During this conference, you must demand the inventor and promoters of the ELM to answer the questions posed in this page and
in all these PDF files.
A General Request: Please consider displaying a link to this page in your web pages, or better yet, hosting this webpage with all its PDF files in your own website, so
that researchers interested in the ELM will be able to locate these materials with ease. You may also include these links in your ELM paper reviews and initiate
discussions on this topic in social media, such as Facebook, linkedIn, WeChat, QQ, connectionist mailing list, etc. by referring to these weblinks:
http://theanonymousemail.com/view/?msg=ZHEZJ1AJ
http://elmorigin.wix.com/originofelm
http://theanonymousemail.com/view?msg=1NOEGFQH
ELM: The Sociological Phenomenon
Since the invention of the ELM name in 2004, the number of papers and citations on the ELM has been increasing exponentially. This phenomenon would not have been
possible without the support and participation of researchers on the fringes of machine learning. Some (unknowingly and a few knowingly) love the ELM for various
reasons:
• Some authors love the ELM, because it is always easy to publish ELM papers in an ELM conference or an ELM special issue. For example, one can simply take
a decades-old paper on a variant of RVFL or RBF and re-publish it as a variant of the ELM, after paying a small price of adding 10s of citations on Huang’s “classic
ELM papers”.
• A couple of editors-in-chiefs (EiCs) love the ELM and offer multiple special issues/invited papers, because the ELM conference & special issues will
bring a flood of papers, high citations and high impact factors to their low quality journals. The EiCs can claim to have faithfully worked within the peer-review
system, i.e. the ELM submissions are all rigorously reviewed by ELM experts.
• A few technical leaders, e.g. some IEEE society officers, love the ELM, because it rejuvenates the community by bringing in more activities and
subscriptions.
• A couple of funding agencies love the ELM, because they would fund a new sexy name, rather than anything else.
• Once associated with the ELM name, without knowing the gravity of the ethical violations, the EiCs of journals that published numerous ELM articles,
the leaders associated with the ELM conference series (ELM 2015, ELM 2014, ELM 2013, ELM 2012), ELM special-issue editors, and senior ELM authors find themselves
unable to change their views, because they could never declare, "I was associated with the ELM without knowing about all these violations, and I was wrong." To
insist that they have done nothing wrong, they continue to support the ELM. Hence, experts untainted by the ELM should investigate the ELM scandal.
One may ask: how can something loved by so many be wrong?
Giordano Bruno proposed that the stars were just distant suns surrounded by their own exoplanets, a view that ran against the majority opinion among the scholars of
his time. On 17 February 1600, in the Campo de' Fiori (a central Roman market square), with his "tongue imprisoned because of his wicked words", he was burned at the
stake.
The cardinals who judged Giordano Bruno were: Cardinal Bellarmino (Bellarmine), Cardinal Madruzzo (Madruzzi), Cardinal Camillo Borghese (later Pope Paul V), Domenico
Cardinal Pinelli, Pompeio Cardinal Arrigoni, Cardinal Sfondrati, Pedro Cardinal De Deza Manuel, Cardinal Santorio (Archbishop of Santa Severina, Cardinal-Bishop of
Palestrina).
Galileo has been called the "father of modern observational astronomy", the "father of modern physics", and the "father of modern science". Galileo's championing of
heliocentrism and Copernicanism was controversial within his lifetime, when most subscribed to geocentrism. Brought before the Inquisition in 1633, in front of his
"betters" and under the threat of torture and death, he was forced to his knees to renounce all belief in Copernican theories, and was thereafter sentenced to
imprisonment for the remainder of his days.
A leading cause of the current Greek economic crisis was that a previous government showered its constituents with jobs and lucrative compensation in order to win
their votes, thereby raising the national debt to an unsustainable level. At the time, the government's behavior was welcomed by many, but it led to severe
consequences. Another example of popularity leading to massive disaster can be found in World War II: Hitler rose to power through popular elections.
The price to pay in the case of the ELM is diminished publishing ethics, which, in the long run, will fill the research literature with renamed junk, thereby
rendering the research community and respected names, such as IEEE, Springer, and Elsevier, laughingstocks. Like the previous Greek government and its supporting
constituents, the ELM inventor and his supporters are “borrowing” from the future of the entire research community for their present enjoyment! It is time to wake up
to your conscience.
Our beloved peer-review system was grossly abused and failed spectacularly in the case of the ELM. It is time for machine learning experts and leaders to investigate
the allegations presented on this page and to take corrective action soon.
Why Anonymity? For the same reason as anonymous reviews: to avoid possible personal attacks.
Enjoy a Musical Tribute to Plagiarizers
Top ELM Promoters: We will soon provide a list of top ELM promoters. If you submit papers on the RVFL or other RNN methods superior to the ELM, you may wish to
request that the journal editor-in-chief exclude the ELM inventor and the top ELM promoters as reviewers.
References (These PDF files have annotations and highlights)
S. An, W. Liu, and S. Venkatesh, "Face recognition using kernel ridge regression," in Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern
Recognition (CVPR 2007), IEEE, Piscataway, NJ, pp. 1-7, 2007.
D. S. Broomhead and D. Lowe, "Multivariable functional interpolation and adaptive networks," Complex Systems, vol. 2, 321-355, 1988.
S. Chen, C. F. Cowan, P. M. Grant, "Orthogonal least squares learning algorithm for radial basis function networks," IEEE Trans. Neural Networks, 2(2):302-309, 1991.
M. Fernandez-Delgado, E. Cernadas, S. Barro, and D. Amorim, "Do we need hundreds of classifiers to solve real world classification problems?" Journal of Machine
Learning Research, vol. 15, No. 1, 3133-3181, 2014.
G.-B. Huang, C.-K. Siew, "Extreme learning machine: RBF network case," Proc. ICARCV 2004, pp. 1029-1036 (Int. Conf on Control, Automation, Robotics and Vision).
G. B. Huang, C. K. Siew, "Extreme learning machine with randomly assigned RBF Kernels," Int. J of Information Technology, 11(1):16-24, 2005.
G.-B. Huang, P. Saratchandran, N. Sundararajan, "A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation," IEEE Trans on Neural
Networks, 16(1):57-67, 2005.
G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: A new learning scheme of feedforward neural networks," Proc. of IEEE Int. Joint Conf. on Neural
Networks, Vol. 2, 2004, pp. 985-990.
G.-B. Huang, "Reply to comments on 'the extreme learning machine'," IEEE Trans. Neural Networks, vol. 19, no. 8, pp. 1495-1496, Aug. 2008.
G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Trans. on Systems, Man, and Cybernetics,
Part B: Cybernetics, Vol. 42, no. 2, 513-529, 2012.
G.-B. Huang, "An insight into extreme learning machines: Random neurons, random features and kernels," Cognitive Computation, Vol. 6, 376-390, 2014.
G.-B. Huang, "What are Extreme Learning Machines? Filling the Gap between Frank Rosenblatt's Dream and John von Neumann's Puzzle," Cognitive Computation, vol. 7, 2015
(Invited Paper, DOI: 10.1007/S12559-015-9333-0).
Y.H. Pao, G.H. Park, and D. J. Sobajic, "Learning and generalization characteristics of the random vector functional-link net," Neurocomputing, 6(2):163-180, 1994.
J. Park and I. W. Sandberg, "Universal approximation using radial-basis function networks," Neural Comput., vol. 3, no. 2, pp. 246-257, June 1991.
C. Saunders, A. Gammerman and V. Vovk, "Ridge Regression Learning Algorithm in Dual Variables", in Proc ICML 1998.
W. F. Schmidt, M. A. Kraaijveld, and R. P. W. Duin, “Feedforward neural networks with random weights,” Proc. of 11th IAPR Int. Conf. on Pattern Recog., Conf. B:
Pattern Recognition Methodology and Systems, Vol. 2, 1992, pp. 1–4.
L. P. Wang and C. R. Wan, "Comments on 'The extreme learning machine'," IEEE Trans. Neural Networks, Vol. 19, No. 8, 1494-1495, 2008.
H. White, "An additional hidden unit test for neglected nonlinearity in multilayer feedforward networks," Proc. of Int. conf. on Neural Networks, pp. 451-455, 1989.
Email for feedback: [email protected]