John Trono Emeritus Professor of Computer Science

Bio
Education:
M.S. Purdue University
B.S. University of Vermont
Areas of Expertise:
Simulation and predictive modeling; minimal perfect hashing functions; computer science education; concurrent programming using semaphores; Sidon set discovery; the MIPS architecture; analysis of algorithms; and cryptography
Courses I Teach:
- Computer Architecture
- Crypto/Security
- Data Communications and Networks
- Intro to Computer Science
- Operating Systems
My Saint Michael’s:
I came to Saint Michael’s College when the Computer Science department began back in 1982. I use my computer (which is not just for e-mail and searching the Web!) as a tool to solve problems that involve a significant amount of tedious calculations. Many of these problems require a mathematical model to simulate inside the computer what is happening in the real world. The computer can then be used to evaluate these “virtual worlds”, and examine their ability to predict the future. The computer can also be used to help determine how realistic these models are in relation to our own physical world. In my classes, if I see that some topics are very difficult for students to learn, I try to develop some pedagogical tools to aid in their understanding, and if these are successful, I then share them with colleagues at other institutions.
Because my classes have fewer than 15 students in them, I really get to know the students fairly well each semester, and therefore, I can give them more individual help (if they need it) than if I were teaching much larger classes. The atmosphere in the classroom is also less formal, which hopefully encourages the students to feel more relaxed and comfortable asking questions or putting forth their ideas during class.
Current Research Projects
This model was trained to match/predict the teams selected by the College Football Playoff (CFP) committee to compete for the NCAA National Championship in football. The links for the weeks below, as well as the links for the pre-bowl CFP standings (and post-bowl, poll-related comparisons), contain the quantities determined by the power rating system (as described in the book The Hidden Game of Football, by Carroll, Palmer and Thorn, 1988) that this model uses to make its predictions. The OD column represents the average difference between how many points a team’s offense has scored and how many points its defense has surrendered. The SOS column represents the strength of schedule that the team earned against its opponents that year. These quantities are determined using the full margin of victory (MOV) over all games played against Football Bowl Subdivision (FBS) opponents that year. The two columns ending in Z are the same quantities with the largest allowed MOV capped at 1 point, essentially capturing only the win-loss behavior of that particular season’s games. (Games against non-FBS opponents are included when MOV is ‘ignored’; all of those opponents are grouped together under one generic team name.)
The multiplying weights used to derive the quantity in the Rating column, in all the files linked to below, are as follows: 0.30912775 * OD, 0.83784781 * SOS, 85.99451009 * ODZ, 49.28798644 * SOSZ and 0.44385664 * (# of losses that year), where this last quantity is subtracted from the sum of the four other products. You can read more about this model in “An Accurate Linear Model for Predicting the College Football Playoff Committee’s Selections”.
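The Rating computation described above amounts to a single weighted sum. A minimal sketch in Python (the function name is mine; only the weights come from the model):

```python
def ilm_rating(od, sos, odz, sosz, losses):
    """Improved Linear Model rating: a weighted sum of the four power
    rating quantities, minus a penalty per loss that season."""
    return (0.30912775 * od
            + 0.83784781 * sos
            + 85.99451009 * odz
            + 49.28798644 * sosz
            - 0.44385664 * losses)
```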
CFP Era: 2024 to the present …
In 2024, the number of invitations to the CFP was expanded from 4 to 12 teams, so the ILM, which was trained to match the top 4 invited teams, may not perform as well moving forward at matching the CFP committee’s top 12 teams. In hindsight, the ILM correctly matched 96 of the 108 teams that appeared in the committee’s pre-bowl/final top 12 from 2014-2019 and 2021-2023 (excluding 2020, when COVID impacted conference – and team – schedules). An investigation into a possibly more accurate set of weights (with regards to matching the top twelve teams) has begun.
2024: Pre-Bowls, Post-Bowls
CFP Era: 2014 to 2023
The Improved Linear Model (ILM) correctly selected 22 of the 24 top-four teams chosen by the CFP committee during the first six years of the CFP, 2014-2019. (The power rating system, when ignoring MOV, correctly selected 20; when using the full MOV, it correctly selected 16.) During 2020, with the pandemic impacting teams’ schedules, the ILM was still able to correctly identify 3 of the 4 teams invited to the CFP. In 2021 and 2022, the ILM agreed with the committee on all four teams invited to compete for the National Championship; however, the ILM included FSU in 2023 – so only three of the four invited teams were correctly matched that year – bringing its total to 36 of the 40 teams chosen to compete in the College Football Playoff. (The power rating system, working with the full MOV, had 2 CFP-selected teams appearing in its top four in 2020, 3 in 2021, 2 in 2022, and just 1 in 2023, raising its correct predictions to 24. The power rating system ignoring MOV correctly chose 3 of the 4 teams in 2020 and 2021; however, it had all four invited teams in 2022 AND in 2023 (excluding undefeated FSU!), raising its total to 34 of the 40 selected over those 10 years.)
2023: Weeks #9, #10, #11, #12, #13, Pre-Bowls, Post-Bowls
2022: Weeks #9, #10, #11, #12, #13, Pre-Bowls, Post-Bowls
2021: Weeks #9, #10, #11, #12, #13, Pre-Bowls, Post-Bowls
2020: Weeks #13, #14, #15, #16, Pre-Bowls, Post-Bowls (Caveats about 2020.)
2019: Weeks #10, #11, #12, #13, #14, Pre-Bowls, Post-Bowls
2018: Weeks #9, #10, #11, #12, #13, Pre-Bowls, Post-Bowls
2017: Weeks #9, #10, #11, #12, #13, Pre-Bowls, Post-Bowls
2016: Weeks #9, #10, #11, #12, #13, Pre-Bowls, Post-Bowls
2015: Weeks #9, #10, #11, #12, #13, Pre-Bowls, Post-Bowls
2014: Weeks #10, #11, #12, #13, #14, #15, Pre-Bowls, Post-Bowls
BCS Era: 1998-2013 (16 years)
The Improved Linear Model matched 26 of the 32 top two teams chosen to compete for the NCAA National Championship (in football) during the 16 BCS years; the power rating system matched (a different set of) 19 teams (both with and without MOV).
2013: Pre-Bowls, Post-Bowls
2012: Pre-Bowls, Post-Bowls
2011: Pre-Bowls, Post-Bowls
2010: Pre-Bowls, Post-Bowls (35 post season bowl games in all this year)
2009: Pre-Bowls, Post-Bowls
2008: Pre-Bowls, Post-Bowls
2007: Pre-Bowls, Post-Bowls
2006: Pre-Bowls, Post-Bowls
2005: Pre-Bowls, Post-Bowls
2004: Pre-Bowls, Post-Bowls
2003: Pre-Bowls, Post-Bowls
2002: Pre-Bowls, Post-Bowls
2001: Pre-Bowls, Post-Bowls
2000: Pre-Bowls, Post-Bowls (25 bowl games this year)
1999: Pre-Bowls, Post-Bowls
1998: Pre-Bowls, Post-Bowls
Poll-Based Years: 1950 up to 1997
1997: Pre-Bowls, Post-Bowls
1996: Pre-Bowls, Post-Bowls
1995: Pre-Bowls, Post-Bowls
1994: Pre-Bowls, Post-Bowls
1993: Pre-Bowls, Post-Bowls
1992: Pre-Bowls, Post-Bowls
1991: Pre-Bowls, Post-Bowls
1990: Pre-Bowls, Post-Bowls (19 bowl games)
1989: Pre-Bowls, Post-Bowls
1988: Pre-Bowls, Post-Bowls
1987: Pre-Bowls, Post-Bowls
1986: Pre-Bowls, Post-Bowls
1985: Pre-Bowls, Post-Bowls
1984: Pre-Bowls, Post-Bowls
1983: Pre-Bowls, Post-Bowls
1982: Pre-Bowls, Post-Bowls
A major restructuring of which teams were designated as Division 1 occurred in 1982. Before that, the 8 college teams in the Ivy League, and another 17 colleges, were considered Division 1. Various teams were re-categorized between 1965 and 1981 (roughly another 16 teams), but usually only a few teams in any given year.
1981: Pre-Bowls, Post-Bowls
1980: Pre-Bowls, Post-Bowls (15 bowl games)
1979: Pre-Bowls, Post-Bowls
1978: Pre-Bowls, Post-Bowls
1977: Pre-Bowls, Post-Bowls
1976: Pre-Bowls, Post-Bowls
1975: Pre-Bowls, Post-Bowls
1974: Pre-Bowls, Post-Bowls
1973: Pre-Bowls, Post-Bowls
1972: Pre-Bowls, Post-Bowls
1971: Pre-Bowls, Post-Bowls
1970: Pre-Bowls, Post-Bowls (11 bowl games)
1969: Pre-Bowls, Post-Bowls
1968: Pre-Bowls, Post-Bowls
1967: Pre-Bowls, Post-Bowls
1966: Pre-Bowls, Post-Bowls
1965: Pre-Bowls, Post-Bowls
1964: Pre-Bowls, Post-Bowls
1963: Pre-Bowls, Post-Bowls
1962: Pre-Bowls, Post-Bowls
1961: Pre-Bowls, Post-Bowls
1960: Pre-Bowls, Post-Bowls (9 bowl games)
1959: Pre-Bowls, Post-Bowls
1958: Pre-Bowls, Post-Bowls
1957: Pre-Bowls, Post-Bowls
1956: Pre-Bowls, Post-Bowls
1955: Pre-Bowls, Post-Bowls
1954: Pre-Bowls, Post-Bowls
1953: Pre-Bowls, Post-Bowls
1952: Pre-Bowls, Post-Bowls
1951: Pre-Bowls, Post-Bowls
1950: Pre-Bowls, Post-Bowls (8 bowl games)
Welcome to the NCAA Final Coaches’ Poll Modeling/Prediction.
The first two models in the table below will add the specified weight (according to how many wins a team achieves in the NCAA tournament – not including ‘play-in’ games) to the coaches’ poll total (the poll before the NCAA tournament begins), once the latter has been normalized into the range from zero to one. For instance, the two teams who reach the Final Four, but lose their next game, will have 1.5 added to their normalized, penultimate poll total when using the LN2 model’s weights.
The last three models below will add the specified weight to the full Tournament Selection Ratio (TSR) that is computed for each team, which is comprised of objective and subjective measures: the TSR uses the normalized AP and ESPN/USA Today (penultimate) polls (25% each), and a trimmed Borda mean over eight computer-based systems (four that incorporate the full margin of victory and four that employ no margin of victory, i.e., only wins and losses matter), which makes up the other 50% of the TSR. (More details about the TSR can be found in the following paper: “Evaluating Regional Balance in the NCAA Men’s Basketball Tournament using the Tournament Selection Ratio”, Proceedings of the Fourth International Conference on Mathematics in Sport, June 5-7, 2013. Here is a PDF file containing the tech report that provides more details about the origin of the models appearing on this page; this report was summarized at the 27th European Conference on Operational Research, held July 13th-15th, 2015.)
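The TSR blend described above is a straightforward weighted average. A minimal sketch, assuming all three inputs have already been normalized to the range [0, 1] (the function and argument names are mine):

```python
def tsr(ap_norm, coaches_norm, computer_borda_norm):
    """Tournament Selection Ratio: 25% each from the two normalized
    human polls, plus 50% from the trimmed Borda mean over the eight
    computer-based systems (assumed already normalized)."""
    return 0.25 * ap_norm + 0.25 * coaches_norm + 0.50 * computer_borda_norm
```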
Starting at zero, LN2 adds increments that grow by 0.1 each time (0.1, then 0.2, then 0.3, and so on) to generate each next larger weight. ZPF and ZP2 rely on the Zipf distribution: ZPF begins with 1/7, then adds 1/6, then 1/5, and so on, until adding 1/1 for the tournament champion’s weight, whereas ZP2 begins with 1/8 and finally adds 1/2 for the champion. PR2 is similar in spirit to the Zipf-based models but employs only prime numbers in the denominator, with 2 in the numerator: the first weight is 2/17, to which 2/13 is added, followed by 2/11, then 2/7, 2/5, 2/3 and finally 2/2 (like ZPF). Finally, 50T makes each weight roughly 50% larger than the previous one, beginning with 0.24.
# Wins 0 1 2 3 4 5 6
LN2 0.1000 0.3000 0.6000 1.0000 1.5000 2.1000 2.8000
ZP2 0.1250 0.2679 0.4345 0.6345 0.8845 1.2179 1.7179
ZPF 0.1429 0.3095 0.5095 0.7595 1.0929 1.5929 2.5929
PR2 0.1176 0.2715 0.4533 0.7390 1.1390 1.8057 2.8057
50T 0.2400 0.3600 0.5400 0.8100 1.2100 1.8100 2.7100
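The table’s rows are running sums of simple per-win increments. A short sketch (function names are mine) that regenerates four of the rows, rounded to four decimals as in the table; 50T is omitted since its published values involve intermediate rounding:

```python
def cumulative(increments):
    """Running sums of the per-win increments, rounded to 4 decimals."""
    weights, total = [], 0.0
    for inc in increments:
        total += inc
        weights.append(round(total, 4))
    return weights

ln2 = cumulative(i / 10 for i in range(1, 8))             # 0.1, 0.2, ..., 0.7
zpf = cumulative(1 / d for d in range(7, 0, -1))          # 1/7, 1/6, ..., 1/1
zp2 = cumulative(1 / d for d in range(8, 1, -1))          # 1/8, 1/7, ..., 1/2
pr2 = cumulative(2 / p for p in (17, 13, 11, 7, 5, 3, 2)) # 2/17, ..., 2/2
```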
Average SCC values for the 26 years from 1993-2018.
Model SCC-15 SCC-25 SCC-35 Avg. Diff. (top 35)
ZP2 0.92997 0.95631 0.93903 2.1887
PR2 0.91144 0.94804 0.93603 2.2964
50T 0.91927 0.95091 0.93493 2.2192
LN2 0.92294 0.95031 0.94586 2.1107
ZPF 0.89550 0.94227 0.94531 2.2597
MCB 0.84777 0.85040 0.85481 3.4023
OCC 0.89858 0.93982 0.92142 2.7581
The article “How Predictable is the Overall Voting Pattern in the NCAA Men’s Basketball Post Tournament Poll?” appears in Chance (published by Springer-Verlag, under the supervision of the American Statistical Association), Volume 27, Issue 2, 2014, and describes more about how the MCB model was derived. (Here is a preprint of that article.)
The MCB model (Monte Carlo “Best”, for the best-performing weights after running the simulation model with millions of random possible weight values) derived its coefficient values by evaluating a weighted least squares regression model. The weights used to produce the predicted final poll (vote) total are: each team’s winning percentage (×100) is multiplied by 6.68507; each team’s power rating is multiplied by 17.64763; the number of NCAA tournament wins + 1 is multiplied by 88.24644; and the number of NIT tournament wins + 1 is divided by four before that same multiplication by 88.24644.
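Written out, the MCB prediction is a single linear combination. A sketch under stated assumptions (the function and argument names are mine, and I assume a team contributes either the NCAA term or the quarter-weight NIT term, which the text does not spell out):

```python
def mcb_vote_total(win_pct, power_rating, ncaa_wins=None, nit_wins=None):
    """MCB predicted poll (vote) total from the coefficients above.
    win_pct is a fraction in [0, 1]; pass exactly one of ncaa_wins /
    nit_wins (an assumption, not stated in the text)."""
    total = 6.68507 * (win_pct * 100) + 17.64763 * power_rating
    if ncaa_wins is not None:
        total += 88.24644 * (ncaa_wins + 1)
    elif nit_wins is not None:
        total += 88.24644 * (nit_wins + 1) / 4   # NIT term at quarter weight
    return total
```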
The OCC model was presented at the Sixth International Conference on Mathematics in Sport, June 26-28, 2017, and the paper entitled “Applying Occam’s Razor to the Prediction of the Final NCAA Men’s Basketball Poll” from that conference’s proceedings can be found here. Essentially, the OCC model takes each team’s rank in the penultimate poll, adds 1.05 to that integer value, and then divides the result by 2 raised to (the number of NCAA tournament wins + 1). Finally, the teams are sorted into ascending order according to these values, producing the predicted final poll after the tournament is done. (Teams not ranked before the tournament use a value of 67 before adding the 1.05, and NIT wins are essentially worth one fourth of an NCAA win.)
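A hedged sketch of the OCC computation as I read it (the names are mine; I interpret “2 raised to the number of NCAA tournament wins + 1” as 2^(wins + 1), with each NIT win counted as a quarter of an NCAA win):

```python
def occ_value(rank, ncaa_wins=0, nit_wins=0):
    """Smaller is better; rank=None means unranked (treated as 67)."""
    wins = ncaa_wins + 0.25 * nit_wins        # NIT win = 1/4 NCAA win
    base = 67 if rank is None else rank
    return (base + 1.05) / 2 ** (wins + 1)

def occ_predicted_poll(teams):
    """teams: {name: (rank, ncaa_wins, nit_wins)} -> predicted order."""
    return sorted(teams, key=lambda t: occ_value(*teams[t]))
```

For example, a champion ranked 2nd entering the tournament (six wins) leapfrogs a 1st-ranked team that lost its opening game, since the former’s value is far smaller.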
Research
Research Interests:
Simulation and predictive modeling; minimal perfect hashing functions; computer science education; concurrent programming using semaphores; Sidon set discovery; the MIPS architecture; analysis of algorithms; and cryptography.
Below are pages that contain information concerning student research projects that I have advised or been involved with:
David Kronenberg (Summer 2009) – Investigating the Impact of Seed Value Choices for the K-Means Clustering Algorithm
Patrick Redmond (Summer 2009) – Studying an Exemplar-Based Approach to Cluster Determination
Andrew Bays (Summer 2004) – Parallel Search for Smallest K-element Sidon Sets in a Distributed System
Monique Willey (Summer 2001) – Adding Randomness to the Noisy Additive Cryptosystem (NAC)
Research Papers
Publications
“The NCAA CFP Committee Goes to Monte Carlo …”, Math Horizons, Volume 29, #2, November, 2021.
“An Accurate Linear Model for Predicting the College Football Playoff Committee’s Selections”, SMC Tech Report (SMC-2020-CS-001), 2020. PDF
“Objectively Modelling the College Football Playoff Committee’s Selections”, Proceedings of the 7th International Conference on Mathematics in Sport, 2019: published by the Institute of Mathematics and its Applications. PDF
“CS1 Programming Assignments That Can Help to Increase Awareness of Cybersecurity Issues”, Journal of Computing Sciences in Colleges, Volume 34, #2, December, 2018.
“Efficiently Searching for a Solution to a Kirkman Packing Design Problem”, Journal of Computing Sciences in Colleges, Volume 33, #2, December, 2017.
“Applying Occam’s Razor to the Prediction of the Final NCAA Men’s Basketball Poll”, Proceedings of the 6th International Conference on Mathematics in Sport, 2017: published by the Institute of Mathematics and its Applications. PDF
“Is it Possible to Objectively Generate the Rankings Produced by the College Football Playoff Committee”, SMC Tech Report (SMC-2016-CS-001), 2016. PDF
“Predicting the NCAA Men’s Postseason Basketball Poll More Accurately”, SMC Tech Report (SMC-2015-CS-001); 2015 (coauthored with SMC colleague Dr. Phil Yates). This was presented at EURO 2015, which is the premier European conference for Operational Research and Management Science. PDF
“Transactions: They’re Not Just For Banking Any More”, Journal of Computing Sciences in Colleges, Volume 30, #5, May, 2015.
“How Predictable is the Overall Voting Pattern in the NCAA Men’s Basketball Post Tournament Poll?”, Chance (published by Springer-Verlag, and supervised by the American Statistical Association), Volume 27, Issue 2, 2014 (coauthored with SMC colleague: Dr. Phil Yates). Preprint PDF
“Increasing Student Confidence throughout the Computer Science Curriculum”, Psychology of Programming Interest Group (PPIG), Work-in-Progress (WIP) Workshop, July 8-9, 2013.
“Evaluating Regional Balance in the NCAA Men’s Basketball Tournament using the Tournament Selection Ratio”, Proceedings of the 4th International Conference on Mathematics in Sport, June 5-7, 2013. PDF
“A Longitudinal Study of Regional Bracket Equality in the NCAA Men’s Basketball Tournament”, SMC Tech Report (SMC-2013-CS-001), 2013.
“Updated MPHF Weights for Ada 2012”, Ada Letters (published by the Association for Computing Machinery: ACM), Volume 32, #1, April 2012.
“Security Enhancements for the Additive Cryptosystem”, the Journal of Computing Sciences in Colleges, Volume 27, Issue 3, January, 2012.
“Rating/Ranking Teams through the (Spanning) Trees”, Proceedings of the 3rd International Conference on Mathematics in Sport, June, 2011. PDF
“Rating/Ranking Systems, Post-Season Bowl Games, and ‘The Spread’”, Journal of Quantitative Analysis in Sports: Berkeley Electronic Press (BEP), Volume 6, Issue 3, Article 6, 2010.
“On k-minimum and m-minimum edge-magic injections of graphs”, Discrete Mathematics (Elsevier), January, 2010, volume 310, issue 1, pages 56-69. (Coauthored with Dr. John P. McSorley, Associate Professor of Mathematics at Southern Illinois University.)
“A Simple Encryption Strategy Based on Addition,” the Journal of Computing Sciences in Colleges, Volume 24, #6, June, 2009.
“Discovering More Properties of the Fibonacci Sequence”, Proceedings of the 15th Annual CCSC Central Plains regional conference, Volume 24, #5, May, 2009.
“A Discovery-based Capstone Experience”, Proceedings of the 14th Annual CCSC Central Plains regional conference, Volume 23, #4, April, 2008.
“An Effective Nonlinear Rewards-Based Ranking System”, Journal of Quantitative Analysis in Sports (JQAS): Berkeley Electronic Press (BEP), Volume 3, Issue 2, Article 3, 2007.
“Optimal Table Lookup for Reserved Words in Ada”, Ada Letters, Volume 26, #1, April 2006.
“Search for a Shortest K-Element Sidon Set in Parallel”, Proceedings of the 2005 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA), June 2005 (with Andrew J. Bays who graduated in December, 2004).
“Can You Beat the Odds(makers)?”, Math Horizons (published by the Mathematical Association of America), April 2005.
“Overtake & Feedback Follow-Up”, Dr. Dobb’s Journal: Software Tools for the Professional Programmers (#360), May, 2004.
“Applying the Overtake & Feedback Algorithm”, Dr. Dobb’s Journal (#357), February, 2004.
“An Extended Programming Assignment (Using Java)”, PDPTA – 2001, June 2001.
“Arithmetical Croquet”, Proceedings of the Sixth Annual CCSC Northeastern Conference, Volume 16, #4, April 2001. (This was recognized as the third best paper by the conference’s committee.)
“March Mathness: An Analysis of a Nonstandard Basketball Pool”, Math Horizons, February, 2001 (with Aaron Archer, Richard Cleary, and Robin Lock).
“Comments on ‘Tagged Semaphores’”, ACM Operating Systems Review, Volume 34, #4, October 2000.
“Further Comments on ‘A Correct and Unrestrictive Implementation of General Semaphores’”, ACM Operating Systems Review, Volume 34, #3, July 2000 (with William E. Taylor – undergraduate coauthor).
“A Quantitative Examination of Computer Architecture Evolution”, Proceedings of the Fourth Annual CCSC Northeastern Conference, Volume 14, #4, May 1999.
“A Comparison of Three Strategies for Computing Letter Oriented, Minimal Perfect Hashing Functions”, ACM Special Interest Group on Programming Languages (SIGPLAN) Notices, Volume 30, #4, April 1995.
“Taxman Revisited”, ACM Special Interest Group on Computer Science Education (SIGCSE) Bulletin, Volume 26, #4, December 1994.
“A New Exercise in Concurrency”, ACM SIGCSE Bulletin, Volume 26, #3, September 1994.
“An Undergraduate Project to Compute Minimal Perfect Hashing Functions”, ACM SIGCSE Bulletin, Volume 24, #3, September 1992.
“Average Case Analysis When Merging Two Ordered Lists of Different Length”, ACM SIGCSE Bulletin, Volume 23, #3, September 1991.
“A Deterministic Prediction Model for the American Game of Football”, ACM Simuletter, Volume 19, #1, March 1988.
“NSCS H/MI Implementation Considerations”, Bell Laboratories Technical Memorandum 82-59473-11, October 1, 1982 (with James P. Jenal).
“A Performance Evaluation of NSCS (1NS1)”, Bell Laboratories Memorandum for File, 59473-820805.01MF, August 5, 1982 (with Arthur T. Sullivan).
Interview
My Saint Michael’s:
I appreciate the willingness of students at Saint Michael’s to work hard in classes that challenge their abilities, the esprit de corps with their fellow classmates, their overall appreciation for taking advantage of the significant opportunity of being enrolled in an institution of higher learning for four years, their respect for the faculty, and their investment in becoming intellectually enriched individuals.
I especially enjoy teaching the two course sequence that our Computer Science majors typically take during their junior year (Operating Systems in the fall, and Computer Architecture in the spring). This is mainly due to the fact that after establishing a firm foundation concerning the ideas behind developing software in their first four computer science courses, these two courses really allow for a very detailed study of how Operating Systems work, and what the hardware must do to execute software efficiently. I have recently begun teaching a course in cryptography and computer security, and have found this to be very interesting as well.
In the Computer Science Department, we really get to know our students, and vice versa. If a student gets excited about a specific topic in a class, that student has the opportunity to do some research with that professor under the “CS411 umbrella” in a later semester. Our Linux lab also provides our majors/minors with experience using another platform (besides the MS Windows operating system.)
Life Off Campus:
Hiking, cross country skiing, and biking are some of the ways I stay active when I’m not on campus. I also enjoy watching (and predicting) sport events; reading science fiction; and playing board games like Scrabble, Bohnanza, Rail Baron, Chess, and Backgammon.
Recent News
John Trono of the computer science faculty had a publication in late 2021. Here’s the citation: “The NCAA CFP Committee Goes to Monte Carlo …”, Math Horizons, Volume 29, #2, November, 2021.
(posted July 2022)
John A. Trono, chair and professor of computer science, was one of six panelists who shared their experiences on “The Philosophies of CS1” at the recent Midwest regional Consortium for Computing Sciences in Colleges (CCSC) hybrid conference, held in Fort Wayne, Indiana, on October 1 and 2 – with many individuals attending the sessions via Zoom. John was one of four panelists who participated remotely; the five other panelists were also computer science faculty, from Bethel and DePauw (both in Indiana), Denison (Ohio), Penn State, and the University of the Pacific (in California). John also had his paper “The NCAA CFP Committee Goes to Monte Carlo …” accepted for the November 2021 issue of Math Horizons. This article summarizes improvements that John has recently devised which enhance the applicability of – as well as improve several different measures of accuracy for – his previous modeling approach to predicting which teams the committee will select each year; that earlier model was presented at the 7th International Conference on Mathematics in Sport (in Athens, Greece) during July of 2019.
(posted February 2022)
John Trono, professor of computer science, was one of 128 US delegates participating in the 30th annual European Conference on Operational Research (aka EURO 2019) this past summer. The conference offered more than 2,000 presentations (only one talk per attendee) over a three-day period, hosted by University College Dublin (UCD), Ireland, from June 24-26. John was invited to chair a session on “Uncertainty” after his abstract submission was accepted; he gave his 20-minute presentation in one of the 51 concurrent sessions scheduled during each of those three conference days. Over 60 countries were represented (Germany provided the largest contingent, comprising roughly 10 percent of the attendees), and many diverse topics and subject areas were discussed throughout the conference. John also had dinner on the UCD campus with Saint Michael’s alumna Catherine (Catie) Corrigan ’17, the same Tuesday that John made his presentation. After graduating from St. Mike’s with a B.S. in Mathematics, Catie completed a master’s in Risk Management and Insurance in Limerick, Ireland. She is now an Associate Consultant at Version 1 in Dublin, where she works as part of the Business Systems team. After EURO 2019 concluded, John departed for the 7th Annual MathSport International Conference, held in Athens, Greece, during the first three days of July 2019, where he presented his paper (on a different topic from his EURO 2019 presentation): “Objectively Modelling the College Football Playoff Committee’s Selections.”
(posted February 2020)
John A. Trono, professor of computer science, presented his paper entitled “Efficiently Searching for a Solution to a Kirkman Packing Design Problem,” at the 26th Annual Consortium for Computing Sciences in Colleges (CCSC) Rocky Mountain conference, which was held in Orem, Utah from October 13-14, 2017.
(posted December 2017)
John A. Trono, professor of computer science, will be presenting his paper “Applying Occam’s Razor to the Prediction of the Final NCAA Men’s Basketball Poll” at the Sixth MathSport International Conference, from June 26-28, 2017 as hosted by the University of Padua (Italy).
(Posted June 2017)
John A. Trono, professor of computer science, gave an invited talk on March 18, 2015, at Middlebury College, presenting his “Reasonably Secure Cryptosystem Based on Addition.” He also presented his paper “Transactions: They’re Not Just For Banking Any More” at the Central Plains regional conference of the Consortium for Computing Sciences in Colleges (CCSC-CP), which was held on April 10-11 in Branson, Missouri.
(Posted April 2015)