
Search Results


  • Introducing CANA’s New Website

    Connecting with you is important to us. To improve how we do this - providing you information and getting your feedback - we developed a new website. We feel this site is more modern, attractive, mobile friendly, and sensible. Let us know what you think. CANA Connection is a great place for you to connect with us. We consider you part of our extended community of clients, partners, and other collaborators. This is where you will find our latest news and ideas, and where you can interact with our team of professionals on current topics relevant to you. You will also find an event calendar to stay up to date on our upcoming activities. To complement the new site, you will also see increased activity on our social media channels (Google+, Facebook, LinkedIn, and Twitter). Click the social media icons below to engage with us there too. So, welcome to our new site! Take advantage of CANA Connection and, by all means, provide us feedback. Let us know how you would like to engage. What kind of information would you like to see on our new site? #new #website #CANAAdvisors #CANA #connection #community #blog #launch

  • Another CANA Welcome!

    We are proud to announce the newest members of the CANA team. Each brings their own unique set of skills, as well as deep expertise and experience, to our Logistics & Analytics services. Please welcome Jackie Knapp, Aaron Luprek, and John Moore to CANA! Jackie Knapp is a career business analyst who brings decades of experience providing in-depth business analyses and financial planning support to both commercial and government projects. Jackie joins CANA as a Business Analyst. [LinkedIn profile] Aaron Luprek is a talented software developer who brings immediate expertise in multiple software languages. He, too, has experience working with both commercial and government clients. Aaron joins CANA as a Senior Software Developer. [LinkedIn profile] John Moore brings a wealth of expertise in project management, operations research, change management, business process re-engineering, and performance management & measurement to the CANA team. Most recently, he has dedicated his expert services to multiple DoD clients, including the USMC and the U.S. Army. He joins CANA as a Senior Operations Research Analyst. [LinkedIn profile] Welcome aboard! #Jackie #Aaron #John #businessanalyst #softwaredeveloper #researchanalyst #CANA #CANAAdvisors #team

  • Congratulations on the 50th Anniversary of MORS!

    As fellow MORSians, CANA understands the impact and value that MORS members have provided through continued volunteering - a similar principle that CANA also supports through purposeful give-back. With decades of combined military service and application of operations research methods to military and commercial logistics problems, CANA appreciates the value of MORS and recognizes the positive impact that the MORS community has had on important National Security decisions. For more information on MORS and the MORS community go to www.mors.org. #MORS #Phalanx #CANAAdvisors #research #operationsresearch #nationalsecurity

  • A CANA Leader - The New President of MORS, Norm Reitter

    CANA would like to congratulate our Director of Analytics, Mr. Norman Reitter, as the new president of the Military Operations Research Society (MORS). This is a significant achievement and adds to Norm’s distinguished career in National Security Analysis. In becoming President of MORS, he joins a distinguished cadre of past presidents including Wayne Hughes, Gregory Parnell, and his Master’s Advisor, Dr. David Schrady. Norm has served as a Director of MORS since 2011. Prior to becoming President-Elect in 2015, he served as Vice President, Financial Management. You can read about Norm’s plans for his year as President as well as the rest of the Society’s activities here. #CANA #MORS #president #award #phalanx #society #NormanReitter

  • Coding with 'PIPES'

    This is not a pipe. I, like the rest of our team, frequently use the R language for statistical analysis on various projects. One really cool feature of R is its vibrant community of users and contributors. I was working on some analysis last week and saw an example using the 'pipe' operator %>%, along with a lot of 'buzz' on sites like Stack Overflow and R Bloggers. I have to admit that at first I was resistant to a new package and functions, and I simply didn't 'get it'. Still, the magrittr package seemed to be changing the way people wrote functions in R. I tried it - mostly to rebut my colleagues who had recommended it. After a few minutes of stumbling around, and much to my shock and amazement, I concluded that they were right! What does the 'pipe' operator do? The pipe operator does a (deceptively) simple thing. It takes whatever is on the left-hand side of the operator and 'pipes' it in as the first argument of the expression on the right-hand side. So, x %>% f() is equivalent to f(x). The simplest example I can think of is the following: library(magrittr) 3 %>% + 2 [1] 5 Big deal, right? Actually, it is a big deal, because you can chain them together! Consider this: library(magrittr) data(cars) cars %>% subset(speed < 50) %>% subset(dist > 10) %>% plot() Reading from left to right, it is completely obvious even to a non-programmer what was done. And the best part? The 'cars' object remains pristine, and no accessory datasets were stored in the process! The Name The magrittr package is a nod to the Belgian painter René Magritte and his painting 'The Treachery of Images' (the French at the bottom translates to 'This is not a pipe'). #magrittr #Rlanguage #analysis #PIPE #pipes #functions #statisticalanalysis #statistics
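    As a minimal sketch of the chaining idea, using the same built-in cars dataset as the example above, the piped form is just a left-to-right rewriting of the nested calls:

```r
library(magrittr)  # provides the %>% operator

data(cars)  # built-in dataset: speed and stopping distance

# Nested form: must be read from the inside out
nested <- subset(subset(cars, speed < 50), dist > 10)

# Piped form: reads left to right; each result feeds the next call
piped <- cars %>%
  subset(speed < 50) %>%
  subset(dist > 10)

identical(nested, piped)  # TRUE - same result, easier to read
```

    The 'cars' object itself is untouched either way; the pipe simply removes the need to name (or nest) the intermediate results.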

  • Professional Certifications – an Investment well worth the Effort

    The defining characteristic of a professional is the constant drive to be as fully proficient as possible in their chosen endeavor. One standardized way of measuring proficiency in Operations Research and several other fields - notably computer science - is certification. A certification is a voluntary credential that professionals acquire to demonstrate their competence in the community. This is distinctly different from licensure, which is legally mandated in order to perform a certain type of work. If certifications are voluntary, why pursue them? To my mind, there are three reasons: first, to demonstrate your competence to your colleagues (and potential employers); second, to demonstrate your competence to yourself; third, as a goal for self-study. I'll address each of these in turn, and then speak to the three professional certifications I currently hold: Certified Analytics Professional (CAP, INFORMS), Accredited Professional Statistician (PStat, American Statistical Association), and Chartered Statistician (CStat, Royal Statistical Society). For professionals in the fields of Operations Research and Statistics, the need to demonstrate competence has never been greater; it was brought into sharp focus during my recent transition from the US Navy. I never needed to demonstrate my competence via certification inside the Navy, because the community is small, we all knew each other, and we all routinely saw examples of each other's work product. Approaching transition, I realized that the field of people currently claiming membership in the OR "Tribe" was far greater than the true size of our profession. The normal means to overcome this difficulty - showing a portfolio of recent relevant work - was not an option, because all of the work I did was privileged (a common problem in our field). Certification from INFORMS was a great way to demonstrate professional competence without sharing work examples.
People take certifications for a number of reasons; I will expound on mine below. The common reasons are because it is mandated by an employer or contract, because the individual wants to demonstrate to others that they have a given level of competence, or because they want to demonstrate to themselves that they are competent. As you will see below, my experience is a mixture of desiring to demonstrate my competence to myself, and some good-natured, but pointed, encouragement from my mentors in the broader community. It probably doesn't matter why a person undertakes a course of action if it is, in the long run, good for both themselves and the Profession. In short, I took the CAP exam for three reasons: first, because I write a recurring article for INFORMS/Analytics Magazine, and felt that it might be conspicuous to be a regular contributor without having the certification. Second, my family happened to be out of town the week that the test was offered and I had nothing better to do. Finally, I had dinner with a colleague who was a CAP and he essentially told me to 'man up' and do it. I took the PStat certification for two reasons: first, I have been privately concerned for the past few years that there may be negative growth in Analytics in the future. How can I make such a heretical comment? As was mentioned in the keynote address of the 2016 INFORMS/Analytics conference in April of this year, businesses have put a lot of resources against "Analytics" and do not uniformly feel that they have received the expected return. (Note: in my own practice, I am hyper-attuned to this sentiment.) I felt that it was important to have a second qualification as a hedge against an uncertain future. However, the real reason was that I had lunch with a colleague who was a PStat and he essentially told me to 'man up' and do it. Yes, it is the same colleague from the preceding paragraph.
I took the CStat certification for two reasons, and these are not nearly as satisfying as the preceding two. First, I saw advertised in Significance that for a limited time, the Royal Statistical Society would automatically confer CStat status to any PStat holder that applied. This is not simply 'certificate collecting' but adds my name to a third professional registry, which may prove useful should I ever desire to do business in the United Kingdom. The real reason, however, is so that I can have a meal with my colleague from the last two paragraphs and it will be my turn to do the goading! The Process I will now briefly discuss the mechanics of application for CAP and PStat: CAP. The Certified Analytics Professional currently consists of a verification of education and experience, verification of soft skills, an ethics pledge, and a written examination. The best reference for exam preparation is the CAP website. For those looking to take the exam, I will make two comments: It is a 'breadth' of knowledge exam, not a 'depth' exam. There were areas of the exam that I had no education or previous contact with; this is a common experience. The best way to prepare using the materials from the INFORMS website is to understand not only why the right answers are 'right' but also why the wrong answers are 'incorrect'. PStat. The Accredited Professional Statistician is different in that there is no written exam. Petitioners must demonstrate professionalism in several areas, including education, practice, commitment to continuing education, and an ethics pledge. I found this to be more daunting than the CAP process. There is an interesting aspect to PStat that I did not know before sitting in a 15 June webinar by ASA's Executive Director: applicants who fail to achieve PStat standing are mentored by ASA - given career advice, if you will, as to what personal and professional milestones are next in their careers. CStat.
Because of the conferral agreement, for me this consisted of submitting my credentials from ASA for review by the Royal Statistical Society. "Junior" Qualifications Both INFORMS and ASA offer lower-level qualifications - the Associate Certified Analytics Professional and the Graduate Statistician (GStat). These are meant to serve as 'stepping stones' for early-career professionals en route to eventual full certification. I've covered the external reasons in this note, and would now like to turn to the internal, personal reasons for seeking certifications. Each one for me has led to a personal reckoning of the state of my career, and what 'next things' I should be doing - not just in terms of Practice, but also Scholarship and Service. The satisfaction of having attained certification, (re)affirming your commitment to the Profession, and the Profession's recognition of your efforts is immense, and well worth the price of admission, both in terms of money and 'sweat equity'. Article by Harrison Schramm, Principal Operations Research Analyst, CANA Advisors #certifications #professional #licensure #INFORMS #CAP #PStat #CStat #Accredited #analytics #ASA #GStat #qualifications #sweatequity

  • INFORMS OR/MS Today Articles - Two CANA Analysts Have Articles Published in June 2016 INFORMS OR/MS Today

    The June 2016 issue of INFORMS OR/MS Today contained two articles authored by CANA analysts. OR/MS Today is a bimonthly publication covering a broad range of operations research subjects, with a readership of over ten thousand. Our very own Harrison Schramm coauthored an article with Scott Nestler covering why understanding p-values is important. < Link to INFORMS Article > In addition, CANA's Walt DeGrange coauthored an article with Gary Cokins, Stephen Chambal, and Russell Walker that presents sports analytics and the development of a new sports analytics taxonomy. It also explains why an analytics taxonomy is important for the growth of the sports analytics field, both commercially and academically. < Link to INFORMS Article > CANA Advisors is proud to have thought leaders like Harrison and Walt. #statisticalanalysis #pvalues #sports #analytics #INFORMS #WaltDeGrange #GaryCokins #StephenChambal #RussellWalker

  • Document Preparation... in R?

    R Markdown I've gone a little bit off the 'deep end' when it comes to programming in R recently. I was turned on to R Markdown a few months ago by a friend, but hadn't really had time to play with it. Until now. The builders of R Markdown at RStudio have a technical explanation of it; but to me, it is: a tool that brings word-processing functions to a level playing field with the analysis and graphics already resident in R. OK, sounds cool - why would anyone want that? When I start an analysis project, I'm a madman (sorry, colleagues). I write code like a fiend and have datasets with names like TF, TFNZ, TFD, TFD2, TFDD, TotalMassRetain, SiberianKhatru, and so on. At some point, the mania of the beginning of the project settles down, and there are three things I have to do. Tasks: 1. Write stable, usable code. (Note: 'usable' means 'usable by someone other than me.') 2. Write a report. 3. Produce professional graphics: export, save, reformat, import, resize, etc. Here is how using R Markdown compares with my previous production method: three disparate tasks are now combined into one consolidated task. R Markdown puts the code in the document. It's hard to think of a better, more stable way to save code. Chunks of code are inside the document (and can be easily cross-ported using the 'To Console' function of the History window in RStudio). This is especially useful for graphics, because they are automatically created and rendered in the destination format. R Markdown can render (or knit) to Word, HTML, or LaTeX. And, if the data changes, the report changes as well. A very quick drive The RStudio example uses the mtcars dataset that comes standard in base R. Here, I'm going to play with the 'Old Faithful' dataset, which can be accessed with the command data("faithful").
    We can compute a summary of the data in the following chunk of code: {r tableEX, echo = FALSE} faithful %>% summary() %>% kable() We can also make nice-looking graphs using ggplot2: [Figure: Eruptions and Waiting Time for Old Faithful] We can even do linear regression inside R Markdown. Two final thoughts: 1. If I wanted to repeat the analysis with different data, I would only have to change one line of code. 2. This document was written in R Markdown. Markdown is a simple formatting syntax for authoring HTML, PDF, and MS Word documents. For more details on using R Markdown see http://rmarkdown.rstudio.com. #RMarkdown #RStudio #dataset #documentpreparation #analysis
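    As a sketch of what such a self-contained document can look like (the title, output format, and chunk options here are illustrative, not the post's actual file), an R Markdown source along these lines would knit the summary table above straight into a Word document:

````markdown
---
title: "Old Faithful Summary"
output: word_document
---

```{r setup, include=FALSE}
library(magrittr)
library(knitr)      # provides kable() for rendering tables
data("faithful")    # built-in dataset: eruption and waiting times
```

Summary of the eruption data:

```{r tableEX, echo=FALSE}
faithful %>% summary() %>% kable()
```
````

    Clicking Knit in RStudio (or calling rmarkdown::render() on the file) runs every chunk against the current data and regenerates the report, which is exactly why a data change only requires changing one line.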

  • CANA at SportCon!

    CANA welcomes Jesse McNulty to the sports analytics team. Jesse is the former Director of Analytics for the Atlanta Blaze in Major League Lacrosse. After partnering with Jesse on numerous analytics projects over the past few years, he now has the opportunity through CANA to spread his expertise in sports analytics to multiple lacrosse teams and other sports. Jesse recently presented his work at SportCon and then wrote this informational post on the conference. The worlds of analytics and statistical analysis in sport are increasingly merging. This fact was celebrated by MinneAnalytics this past Friday as they hosted the second annual SportCon event at Optum Headquarters in Eden Prairie, Minnesota. The wide-reaching conference, which featured an extensive array of sports team executives, experts, hobbyists, and entrepreneurs from every corner of sports and technology, offered plenty for all attendees to enjoy. Highlighted events in the morning sessions included the baseball panel featuring Jeremy Raadt and Daniel Adler from the Minnesota Twins, along with Ty McDevitt and Patrick Casey of the University of Minnesota baseball team. During this session, the panelists provided their thoughts on the use, support structures, and coaching best practices for maximizing objective analysis in the game. Next was a presentation from Prof. Rodney Paul of Syracuse University, who applied economic analysis to attendance data in Canadian Major Junior Hockey. Prof. Paul evaluated league attendance for the Ontario Hockey League, Quebec Major Junior Hockey League, and Western Hockey League across variables including winning percentage, weather, rostered prospects, and many more. To round out the morning session, I presented on Analysis in Major League Lacrosse. This presentation focused on "version 2.0" of that work, expanding upon what has been done over the last 24 months and discussed at the inaugural SportCon the previous year.
The presentation was well received, and attendees asked insightful questions about opportunities for growth, data collection, team structures, and future marketing opportunities. In addition to panels, speakers, start-up company showcases, and presentations, the SportCon event also included a tasty lunch and a midday mascot rampage from the Timberwolves, Wild, Vikings, and the independent-league baseball team, the St. Paul Saints. Also, a series of professionals showed off drone racing, and members of the Drone Racing League (DRL) flew drones in, around, and over lunch attendees. The lunch session provided plenty of networking opportunities. I was seated with Nick Restifo of the Minnesota Timberwolves, Seth Partnow of the Milwaukee Bucks, and Prof. Zakary Mayo of St. Mary's University of Minnesota. Much of our conversation centered on thoughts and reflections from the morning session on Analytics in Major League Lacrosse. We debated draft analysis, player performance monitoring, and league-wide data support, which provided a variety of thought experiments on our present collection process, the marketing of league data and materials, and future considerations for the collection of in-game events. A highlighted session from the afternoon featured Prof. Tyler Bosch of the University of Minnesota and Dexalytics on monitoring and individualization of performance data in college athletes. Dr. Bosch focused much of his talk on providing frameworks for understanding differences in training responses. Perhaps in the coming years, integrating analytics into the collegiate ranks through the lens of human performance can help close the gap on a variety of focuses within science and sports medicine. The 2018 SportCon event was well organized and serves to illuminate further research in the science and statistical analysis of sport. Also, SportCon hopes to elevate the burgeoning world of sports tech.
Walt DeGrange is a Principal Operations Research Analyst at CANA Advisors. To read more on Sports Analytics and articles by other members of the CANA Team, visit the CANA Blog. #sports #sportsanalytics #SportCon #JesseMcNulty #WaltDeGrange #MinneAnalytics #Analytics #Dexalytics

  • Projecting the Incidence of Blindness in Monterey

    Purpose CANA took on a pro bono project supporting the Blind and Visually Impaired Center of Monterey County, a not-for-profit organization serving individuals on the peninsula. The Monterey Center needed to project the future population of the visually impaired in their areas of interest - Monterey, Salinas, Carmel, and Carmel Valley, California - to better position their limited resources. To approach this problem, we took a two-step approach: using existing data to determine the risk of vision loss as a function of location, age, and sex, and applying those factors to the projected demographics of communities on the Central Coast. The main source of data for this project is American FactFinder, created by the U.S. Census Bureau, specifically table C18103, "Sex by Age by Vision Difficulty." Figure 1: Screenshot of the American FactFinder interface, captured 17 October 2016 Summary data from the years 2008-2015 were captured as separate Microsoft Excel files from the web page. These files were then processed and brought together as summary data for predictions. This somewhat mundane task was aided by a set of automated routines built in R. Specifically, we captured each spreadsheet in a list and then iterated over the list to extract the by-year columns. In addition to the State of California, we also collected data for Monterey County, San Francisco County, the City of Salinas, and the San Jose metro area. This dataset lacks some elements we would like to include, such as veteran status, income, and ethnicity. However, we feel that these variables are controlled for by metropolitan area sufficiently for the purposes of this project. Figure 2: Map of the Central California Region We then turn to the first task, which is analyzing the loss of vision as persons age, by location and sex. While we compiled this information for each area, we present the results for Monterey County only.
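    The ingestion step described above can be sketched as follows; the file names, sheet layout, and column name are illustrative assumptions, not the project's actual code:

```r
library(readxl)  # read_excel(); any Excel reader would do here

# Hypothetical file names: one American FactFinder export per year
files <- sprintf("ACS_C18103_%d.xlsx", 2008:2015)

# Capture each spreadsheet in a list...
sheets <- lapply(files, read_excel)
names(sheets) <- 2008:2015

# ...then iterate over the list to extract the by-year columns
# ("vision_difficulty_estimate" is a placeholder column name)
by_year <- lapply(sheets, function(df) df[["vision_difficulty_estimate"]])

# Bind the yearly columns into one summary table for prediction
summary_data <- do.call(cbind, by_year)
```

    Keeping the raw exports in a named list makes the per-year processing a single lapply() rather than eight copy-pasted blocks.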
    Figure 3: Boxplot of Proportion of Visually Impaired Population by Sex and Age The boxplot of risk, showing the average incidence of Visual Impairment (solid line) and its variability per year (box), is the most interesting artifact. Projecting the future impaired population of Pacific Grove, CA Pacific Grove, California (zip code 93950), is a small city. It is unique because there is, for all practical purposes, no undeveloped land, and the local government actively works to keep the city size stable. For our purposes, we consider the city's population to be fixed. From our previous analysis, we determined that the percentage of the Visually Impaired population by age group is: under 18, 0.6%; 18-64, 1.1%; and over 64, 6.2%. Sex is not a significant determinant of visual impairment for this population. The demographics of Pacific Grove are below: Based on the information determined above, we estimate that in 2020, the Visually Impaired population of Pacific Grove, California will be approximately 335 persons. This is slightly higher than our estimate of 315 for 2015. Discussion There are two factors that contribute to the very low (less than 1%) growth in visual impairment in Pacific Grove. These are: 1. No population growth in the city. There are no undeveloped areas in Pacific Grove; the current population number will almost certainly remain constant. In fact, Pacific Grove had negative population growth between 2000 and 2010. 2. No growth in the over-64 population. The population greatest at risk, those over 64 years old, is unlikely to see strong growth in the near to mid term. This is because the city already has a substantial older population. See a comparison of Pacific Grove demographics with Salinas City, below. Conclusion and Next Steps This work took a quick look at predicting the incidence of Visual Impairment using census data.
This work did not account for differences in education, work experience, or veteran status with respect to loss of sight. We were surprised to discover how uniform the rates of visual impairment are across the populations of Central California; in the future, we may consider how California compares with other states and/or countries. It would be worthwhile to consider the data that various agencies may have as part of their records. This work only considered the incidence of Visual Impairment; it did not consider the causes or the different treatments/services required by that population. Acknowledgments This work was supported in part by a grant from the CANA Foundation. #CANAFoundation #AmericanFactFinder #Census #blind #visuallyimpaired #outreach #Incidence #Monterey #demographics #dataset #California #Salinas #PacificGrove
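    The projection arithmetic in this item is simple to sketch in R: multiply each age group's projected population by its impairment rate and sum. The rates below are the ones estimated in the analysis; the age-group counts are illustrative placeholders, not the actual Pacific Grove census figures:

```r
# Impairment rates estimated in the analysis above
# (under 18: 0.6%; 18-64: 1.1%; over 64: 6.2%)
rates <- c(under18 = 0.006, adult = 0.011, over64 = 0.062)

# Placeholder age-group populations for a hypothetical small city
pop <- c(under18 = 2500, adult = 9000, over64 = 3500)

# Expected visually impaired population: sum of rate * population
sum(rates * pop)
# 15 + 99 + 217 = 331 persons with these placeholder counts
```

    Because the over-64 rate is roughly six times the adult rate, the projection is dominated by the size of the oldest age group, which is why the fixed older population of Pacific Grove keeps the projected growth so low.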

  • Here is to a Great Year Ahead

    CANA would like to thank you all for a great 2016 and wishes you all the very best for the new year. Let's make 2017 fantastic! #CANA #CANAFoundation #family #newyear #team

  • Notes from FiveThirtyEight Talk on Telling Stories

    “This is the best talk I’ve attended in over a year.” - Harrison Schramm You may know Harrison Schramm from his “5 Minute Analyst” articles and blog posts, and when he isn’t calculating the cost of the Death Star or solving the logistics problems of Harry Potter, he is also one of CANA Advisors’ Principal Operations Research Analysts. Recently he had the opportunity to attend a FiveThirtyEight talk on telling stories (at rstudio::conf). In his words, Harrison said, “[t]his is the best talk I’ve attended in over a year.” In a change of pace from writing a blog post or article on the talk, we asked Harrison if he would share his notes on the event, and he was kind enough to pass them along. We hope these notes spark your interest in not just the ‘how’ but the ‘why’ of statistical analysis. ****From the Event Notebook of Harrison Schramm**** Data Journalism Principles: Story leads, data follows. Use rigorous but interpretable methods: be accurate, be fast, and be transparent. Useful tools for R: the tidyverse is the tool of choice for data. (The tidyverse is a set of packages that work in harmony because they share common data representations and API design. https://blog.rstudio.org/2016/09/15/tidyverse-1-0-0/) In the interest of transparency, FiveThirtyEight has created an R package. (Nate Silver’s FiveThirtyEight uses statistical analysis - hard numbers - to tell compelling stories about politics, sports, science, economics and culture. https://github.com/fivethirtyeight) For example, if you would like to see a breakdown of Avengers characters by longevity and gender, you can do the following: install.packages("fivethirtyeight") library(ggplot2); library(magrittr); library(fivethirtyeight) avengers %>% ggplot(aes(factor(death1), years_since_joining)) + geom_violin() + facet_wrap(~gender) + xlab("Currently Living?") + ylab("Years Since Joining") + ggtitle("Avengers Characters Violin Plot - Status vs.
Years")

    The Six Types of Data Stories: Novelty, Outlier, Archetype, Trend, Debunking, Forecast.

    Novelty stories: Basic questions come first. Danger: triviality. Remedy: simple summaries. Ask yourself: Is this data meaningful to others?

    Outlier stories: Danger: a spurious result. Tactic: characters - talk about who the outlier is: who is it, what company is it, etc. Profile one of the characters from the outlier group, then introduce the statistics. Ask yourself: Is this really so different?

    Archetype stories: Danger: oversimplification. Tactic: modeling. Ask yourself: What variables am I leaving out?

    Trend stories: Example: terrorism overall is declining in the EU, but religiously inspired attacks are rising. Done using dplyr: data %>% group_by %>% summarize %>% ggplot. Danger: variance - regression to the mean. Tactic: be conservative. Ask yourself: Is this signal or noise? Fun quote: "If you can always tell a valid trend, you should be trading on Wall Street, not telling data stories."

    Debunking stories: Example: the Bechdel test examines how women are portrayed in movies. 1. Are there 2 or more women? 2. Do they talk to each other? 3. Do they talk to each other about something other than men? Danger: confirmation bias - your own belief in the debunking action. Tactic: showcase failures. Ask yourself: How much do I want to debunk this? Quote about p-hacking: "Warning: This is evil (statistical) work. Do not go to the dark side. Do not try this at home." Note: You can read Harrison's piece on p-hacking in OR/MS Today here: https://www.informs.org/ORMS-Today/Public-Articles/June-Volume-43-Number-3/P-value-Primer-P-OR-P-values-in-operations-research-M-N-O-P-Q-R-S-T Example of p-hacking: eating potato chips leads to higher SAT Math scores.

    Forecast stories (you walk a narrow path here): Danger: overfitting. Tactic: simulations and scenarios. Ask yourself: Am I properly conveying the uncertainty in my model?
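    The data %>% group_by %>% summarize %>% ggplot pattern from the trend notes can be sketched as follows; the incident data frame and its columns are illustrative stand-ins, not the EU terrorism data from the talk:

```r
library(dplyr)    # group_by(), summarize()
library(ggplot2)  # plotting

# Hypothetical incident data: one row per event, with a year and a type
incidents <- data.frame(
  year = rep(2010:2015, each = 4),
  type = rep(c("A", "B"), times = 12)
)

# Group, summarize to a count per year and type, then pipe into ggplot
incidents %>%
  group_by(year, type) %>%
  summarize(n = n(), .groups = "drop") %>%
  ggplot(aes(year, n, color = type)) +
  geom_line() +
  labs(title = "Incidents per Year by Type")
```

    The summarized data frame flows straight into ggplot(), so the trend chart is rebuilt from raw events in one readable pipeline.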
We hope these notes from Harrison Schramm on R and how to use it to tell a story with your statistical and analytical data are useful. Follow Harrison (@5MinuteAnalyst on Twitter) and the rest of the CANA Advisors’ team (@CANAADVISORS on Facebook and Twitter) for more insights, blog posts, and articles delving into data, logistics, and analytics in creative and helpful ways. Other interesting CANA articles on R: Blog Article: Document Preparation... in R? http://www.canallc.com/single-post/2016/09/02/Document-Preparation-in-R Blog Article: Notes on The Seven Pillars of Statistical Wisdom http://www.canallc.com/single-post/2016/09/16/Notes-on-The-Seven-Pillars-of-Statistical-Wisdom #stories #R #5MinuteAnalyst #FiveThirtyEight #Rmarkdown #RStudio #bigdata #datatype #datastory #CANA #tidyverse
