
4th Conference of the European Division of the International Association for Identification  

June 13-14, 2019 at SOUND GARDEN HOTEL, Warsaw, Poland 


WE WOULD LIKE TO THANK

OUR CO-ORGANIZER

WELCOME SPEECH

 

DEPUTY DIRECTOR

KRZYSZTOF BORKOWSKI

Bio

Krzysztof Borkowski, born in 1963, is a graduate of the Faculty of Physics, University of Warsaw, and of the Postgraduate Studies for Forensic Experts in Szczytno (1991-1992). In 1991, he joined the Central Forensic Laboratory of the Police (CFLP) as a footwear examiner with forensic expert status. He is one of the authors of the methodology for forensic footwear examination, as well as of proficiency tests by interlaboratory comparison for Regional Forensic Police Laboratories. Since 2008, he has been a member of the Steering Committee of the ENFSI Marks Working Group, and he is a co-author of interlaboratory comparisons in the field of footwear examination. In 2012, Borkowski obtained his PhD in law at the Institute of Criminal Law, Faculty of Law and Administration, University of Warsaw. His PhD thesis, devoted to the forensic identification of footwear, was awarded thesis of the year in the XIV Tadeusz Hanausek Competition. On 1 August 2013, he was appointed CFLP Scientific Projects Manager. He is an active member of a team appointed under the Scientific-Technical Council working under the auspices of the Ministry of the Interior, responsible for drafting the Ministry's strategic programs. Since 2013, he has been a practicing academic teacher at the National Defence University. Borkowski is the author of several forensic publications and books, and an experienced speaker at international conferences. He has headed the CFLP Fingerprint Examination Department since 2014, and on 5 June 2017 he was appointed Deputy Director of the Central Forensic Laboratory of the Police.

 

KEYNOTE SPEAKER

CEDRIC NEUMANN

A new perspective on the analysis of data collected during error rate experiments in forensic science

 

 

Several groups, in particular in the U.S. (e.g., PCAST, NRC, NIST-OSAC, NIJ), have called for the determination of the “error rates” of the different forensic sub-disciplines. Estimating error rates in forensic science is not an easy task. Many discussions are taking place on how to administer error rate experiments in the least biased way, with a sufficiently large number of scientists and a sufficiently large number of test cases, while accounting for the important factors that affect the examination process.

 

  

Unfortunately, experiments in the forensic context cannot be run in the same sheltered way as they are in plant or animal science, or in industry. Due to budget and time constraints, experiments in the forensic context necessarily rely on practicing scientists who graciously donate personal time to support research. Consequently, the data collected during these experiments is often messy, unbalanced and incomplete, and its analysis is an often overlooked challenge; not to mention that interpreting the results of these analyses is far from intuitive (e.g., what is a “confidence interval”, and why does the PCAST report use only a “one-sided confidence interval”?).
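
As a minimal illustration of the “one-sided confidence interval” mentioned above, the sketch below computes an upper 95% Clopper-Pearson bound on a false-positive rate, the kind of summary such reports rely on. It is only a sketch: it assumes SciPy is available, and the counts (3 false positives in 300 non-mated comparisons) are hypothetical, not taken from either study discussed in this talk.

    from scipy.stats import beta

    def upper_bound(x, n, level=0.95):
        """Upper limit of a one-sided Clopper-Pearson interval for a binomial rate."""
        # The bound is the `level` quantile of a Beta(x + 1, n - x) distribution.
        return 1.0 if x == n else beta.ppf(level, x + 1, n - x)

    x, n = 3, 300  # hypothetical: 3 false positives in 300 non-mated comparisons
    print(f"point estimate : {x / n:.4f}")              # 0.0100
    print(f"95% upper bound: {upper_bound(x, n):.4f}")  # about 0.026, i.e. roughly 1 in 39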

  

During this talk, we will explore the difficulties associated with the analysis of data collected during error rate experiments. We will describe a method for analysing messy and incomplete data that actually answers the question at hand (i.e., assigning values to error rates under the uncertainty associated with the experiment). We will apply this method to two well-known experiments aimed at quantifying the error rates of fingerprint examination: the NIJ-funded experiment conducted by the Miami-Dade Police Department (MDPD), and the FBI-driven experiment conducted by Noblis (a.k.a. the “black-box study”). Our method allows us to look at these data in a new light, address some of the controversy surrounding these studies (principally the MDPD study), and draw some conclusions that previously escaped the original researchers. No statistical background is required to enjoy this talk.

 

 

Workshop (3 hours) – Decision-making in forensic science: what information is needed, and which conclusions are supported.

 

The inference of the source of a given trace is the key question addressed by most forensic scientists (at least in the pattern and trace sub-disciplines). The inference process includes many different phases, which, in turn, require different types of information. Fortunately, the decision-making process used in forensic science is no different from the one we use to make decisions at every instant of our lives; thus, this process is, at least unconsciously, very familiar to all of us. During this workshop, we will explore, using a series of examples and exercises rooted in our daily lives, the structure of the decision-making process that we use to reach conclusions in forensic science. We will explore how the different types of conclusions commonly encountered in forensic science relate to different phases of the decision-making process, and we will discuss how our (in)ability to use certain pieces of information during the process (e.g., because of the possibility of bias) affects the types of conclusions that can be logically supported. No statistical background is required to benefit from this workshop.

  

Talk – Defence Against the Modern Arts: The Curse of Statistics¹

After decades of publications, conferences, debates and research, there is an exponentially-growing agreement in the forensic community that conclusions should be supported by data. At the core of this new approach lies mathematics, and more specifically statistics and probability theory. Data enables stronger, more valid inferences, and more transparent conclusions. Whether these conclusions are supported by error rates or are reached through the use of a likelihood ratio is not the concern of this talk. However, with great power comes great responsibility: the use of statistical and probabilistic concepts to interpret data may give a veneer of legitimacy to poor data, a weak understanding of scientific issues, or a flawed methodology.

In this talk, we will review three results, based on data and involving the use of statistics and probability theory, that have been advocated to support forensic conclusions. The first result involves the interpretation of so-called black-box studies to quantify the error rates of fingerprint examination. We will explore the flawed PCAST interpretation of the Miami-Dade Police Department study and discuss how the misuse of statistics led to the 1 in 18 false positive error rate claimed in the PCAST report. The second and third results involve two different attempts to quantify the weight of forensic evidence using calculations that have been given the appearance of the Grail: the likelihood ratio. We will examine these two widely advocated approaches (one in the U.S. and one in the E.U.) and discuss how the misuse of statistics and probability theory leads to algorithms that generate numbers that are either meaningless or dramatically misleading.
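
For readers unfamiliar with the likelihood ratio mentioned above, the toy calculation below shows the basic idea: the probability of the evidence under one proposition divided by its probability under the alternative, combined with prior odds via Bayes' rule. All numbers are invented for illustration and are not drawn from any of the approaches discussed in this talk.

    # Toy illustration of a likelihood ratio (LR); all numbers are invented.
    p_e_given_hp = 0.90    # P(evidence | same source), hypothetical
    p_e_given_hd = 0.001   # P(evidence | different source), hypothetical

    lr = p_e_given_hp / p_e_given_hd   # LR = 900: the evidence is 900 times
    print(f"LR = {lr:.0f}")            # more probable if the prints share a source

    prior_odds = 1 / 1000              # hypothetical prior odds of same source
    posterior_odds = lr * prior_odds   # Bayes' rule in odds form
    print(f"posterior odds = {posterior_odds:.2f}")  # 0.90 -- still below evens

The last line hints at why a large-sounding number can mislead: without the prior odds, the LR alone says nothing about how probable the common-source proposition actually is.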

 

To end this talk on a positive note, we will also explore different strategies to uncover the hollowness of these ideas and some research avenues that are more rigorous and promising. The audience shall not fear attending this talk. Members of the public will neither be cursed nor turned into statisticians (or toads). Many real-world and forensically related analogies will be used to explore these tricky issues.  

 

¹ I wish to thank Dr. Glenn Langenburg (and Harry Potter) for letting me repurpose a title he made famous. I would also like to thank Ben Parker for his wise contribution to this abstract.

 

 

 

Bio

  

Cedric Neumann was awarded a PhD in Forensic Science from the University of Lausanne, Switzerland. From 2004 to 2010, Cedric worked at the Forensic Science Service (FSS) in the United Kingdom. As head of the R&D Statistics and Interpretation Research Group, he contributed to the development of the first validated fingerprint statistical model, which was used to support the admissibility of fingerprint evidence in U.S. courts. Cedric is currently an Associate Professor of Statistics at South Dakota State University. His main area of research is the statistical interpretation of forensic evidence, more specifically fingerprints, shoeprints and trace evidence. Cedric has taught multiple workshops for forensic scientists and lawyers alike. He served on the Scientific Working Group for Friction Ridge Analysis, Study and Technology (SWGFAST), was a member of the Board of Directors of the IAI, and is the resident statistician of the Chemistry/Instrumental Analysis Scientific Area Committee (SAC) in the NIST-OSAC.

KEYNOTE SPEAKER 

ANTHONY KOERTNER

What Do Latent Print Examiners Want in a Statistical Model?

 

 

This presentation will discuss the current progress and inherent limitations of proposed statistical approaches to quantifying fingerprint evidence, with the intent to elicit candid discussion on what can realistically be achieved at this time, and on whether these limitations are “true” issues or merely byproducts of any acceptable scientific method. Shedding light on these perceived limitations may help latent print examiners understand that these statistical models, while not perfect, may be somewhat suitable in their attempt to satisfy some of the demands set forth by the NRC and PCAST. Attendees will gain knowledge of the various statistical models currently in existence in the latent print community, of the scores derived from these models and how they fit into the many available Bayesian verbal equivalent scales used not only in the forensic sciences but in other industries as well, and of recently published research on how potential jurors interpret scores derived from a particular latent print statistical model.

The search for a suitable latent print statistical model to assist in expressing the weight of friction ridge evidence has intensified since the 2009 National Research Council (NRC) report on forensic science entitled Strengthening Forensic Science in the United States: A Path Forward. The NRC report, along with the 2016 President’s Council of Advisors on Science and Technology (PCAST) report entitled Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods, challenged the friction ridge community to become more objective and to develop tools to express the strength of evidence.

As practitioners, we are justifiably concerned with how our evidence can best be presented at trial, in a manner that accurately conveys the strength of the evidence and is understandable by a jury. From a practitioner perspective, we want a model that: (1) supports our opinion of source attribution; (2) does not over- or understate the strength of the evidence; (3) shields us from any potential error; (4) provides the exact same measure every time; and (5) is entirely objective. While these may be desired, can or will we ever achieve these requirements? Are the models currently available “no good”, or do we need to curb our expectations of what can realistically be achieved within the acceptable limits of science?
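
As a rough illustration of the Bayesian verbal equivalent scales mentioned above, the sketch below maps a likelihood-ratio-style score onto verbal labels. The band edges and wording are indicative only; published scales (e.g., the ENFSI reporting guideline) differ in detail between organisations.

    # Illustrative mapping from an LR-style score to a verbal equivalent.
    # Band edges and wording are indicative only, not an official scale.
    BANDS = [
        (1e6, "extremely strong support"),
        (1e4, "very strong support"),
        (1e3, "strong support"),
        (1e2, "moderately strong support"),
        (1e1, "moderate support"),
        (2.0, "weak support"),
    ]

    def verbal_equivalent(lr):
        """Return a verbal label for an LR supporting the first proposition."""
        if lr < 1:
            return "support for the alternative proposition"
        for threshold, label in BANDS:
            if lr >= threshold:
                return label
        return "no appreciable support"  # LR between 1 and 2

    print(verbal_equivalent(3500))  # "strong support" (hypothetical score)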

 

Disclaimer: The opinions or assertions contained herein are the private views of the authors and are not to be construed as official or as reflecting the views of the United States Department of the Army or United States Department of Defense.                    

 

 

Bio

 

Anthony Koertner, CLPE, CFWE

Latent Print Examiner

Defense Forensic Science Center

 

Anthony Koertner is a Latent Print Examiner at the Defense Forensic Science Center. Mr. Koertner graduated from the University of Central Florida in 2006 and began his career in friction ridge examination in 2007. He recently received his Master of Science in Forensic Science from the University of Florida. Mr. Koertner is an Active Member of the International Association for Identification, certified in both latent print examination and footwear and tire tread examination. He currently serves as a member of the Footwear and Tire Subcommittee within the Organization of Scientific Area Committees (OSAC) for Forensic Science.

For more details about the conference speakers, please visit the

 

 

OFFICIAL EU IAI ANNUAL MEETING WEBSITE


European Division of the IAI

Lann van Ypenburg 6

2497 GB, Den Haag

The Netherlands

email: secretary@theiaieu.org