BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//PS Statistics - ECPv4.6.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:PS Statistics
X-ORIGINAL-URL:https://www.psstatistics.com
X-WR-CALDESC:Events for PS Statistics
BEGIN:VEVENT
DTSTART;VALUE=DATE:20190909
DTEND;VALUE=DATE:20190914
DTSTAMP:20190819T212116
CREATED:20190424T185728Z
LAST-MODIFIED:20190604T164527Z
UID:3449-1567987200-1568419199@www.psstatistics.com
SUMMARY:Generalised Linear (Mixed) Models (GLMM)\, Nonlinear Models (NLGLM)\, and Generalised Additive (Mixed) Models (GAMM) (GNAM01)
DESCRIPTION:\nCourse Overview:\nThis course provides a general introduction to nonlinear regression analysis\, covering major topics including\, but not limited to\, general and generalized linear models\, generalized additive models\, spline and radial basis function regression\, and Gaussian process regression. We approach the general topic of nonlinear regression by showing how the powerful and flexible statistical modelling framework of general and generalized linear models\, and their multilevel counterparts\, can be extended to handle nonlinear relationships between predictor and outcome variables. We begin by providing a comprehensive practical and theoretical overview of regression\, including multilevel regression\, using general and generalized linear models. Here\, we pay particular attention to the many variants of general and generalized linear models\, and how these provide a very widely applicable set of tools for statistical modeling. After this introduction\, we then proceed to cover practically and conceptually simple extensions of the general and generalized linear models framework using parametric nonlinear models and\npolynomial regression. We will then cover more powerful and flexible extensions of this modeling framework by way of the general concept of basis functions. We’ll begin our coverage of basis function regression with the major topic of spline regression\, and then proceed to cover radial basis functions and the multilayer perceptron\, both of which are types of artificial neural networks. We then move on to the major topic of generalized additive models (GAMs) and generalized additive mixed models (GAMMs)\, which can be viewed as a generalization of all the basis function regression topics but cover a wider range of topics\, including nonlinear spatial and temporal models and interaction models. Finally\, we will cover the powerful Bayesian nonlinear regression method of Gaussian process regression. 
\n\n\n\nIntended Audience\nThis course is aimed at anyone who is interested in learning and applying nonlinear regression methods. These methods have major applications throughout economics and the other social sciences\, the life sciences\, the physical sciences\, and machine learning. \nVenue – PS statistics head office\, 53 Morrison Street\, Glasgow\, G5 8LB – Google map \nAvailability – 20 places \nDuration – 5 days \nContact hours – Approx. 28 hours \nECTS – Equal to 3 ECTS \nLanguage – English \nPackages\nWe offer COURSE ONLY and ACCOMMODATION PACKAGES;\n• COURSE ONLY – Includes lunch\, refreshments\, and a welcome meal on Monday evening.\n• ACCOMMODATION PACKAGE (to be purchased in addition to the course only option) – Includes breakfast\, lunch\, refreshments\, and a welcome dinner on Monday evening. Self-catering facilities are available in the accommodation. Accommodation is in multiple-occupancy (max 3-4 people) single-sex en-suite rooms. Arrival Sunday 8th September (between 17:00-21:00) and departure Friday 13th September (accommodation must be vacated by 09:15). \nTo book ‘COURSE ONLY’ with the option to add the additional ‘ACCOMMODATION PACKAGE’ please scroll to the bottom of this page. \nOther payment options are available; please email oliverhooker@psstatistics.com \nPLEASE READ – CANCELLATION POLICY: Cancellations are accepted up to 28 days before the course start date\, subject to a 25% cancellation fee. Cancellations later than this may be considered; contact oliverhooker@psstatistics.com. Failure to attend will result in the full cost of the course being charged. In the unfortunate event that a course is cancelled due to unforeseen circumstances\, a full refund of the course fees (and accommodation fees if booked through PS statistics) will be credited. However\, PS statistics will not be held responsible/liable for any travel fees\, accommodation costs\, or other expenses incurred by you as a result of the cancellation. 
Because of this\, PS statistics strongly recommends that any travel and accommodation booked by you or your institute is refundable/flexible\, and that you delay booking your travel and accommodation as close to the course start date as is economically viable. \n\n\n\n \nDr. Mark Andrews\n\n\n\n\nTeaching Format\n\nThis course will be hands-on and workshop based. Throughout each day\, there will be some lecture-style presentation\, i.e.\, using slides\, introducing and explaining key concepts. However\, even in these cases\, the topics being covered will include practical worked examples that we will work through together. \nAssumed quantitative knowledge \nWe assume familiarity with linear regression analysis\, and with the major concepts of classical inferential statistics (p-values\, hypothesis testing\, confidence intervals\, model comparison\, etc). Some familiarity with common generalized linear models such as logistic or Poisson regression will also be assumed. \nAssumed computer background \nR experience is desirable but not essential. Although we will be using R extensively\, all the code that we use will be made available\, and so attendees will just need to make minor modifications to this code. Attendees should install R and RStudio on their own computers before the workshops\, and have some minimal familiarity with the\nR environment. \nEquipment and software requirements \nA laptop computer with working versions of R and RStudio is required. R and RStudio are both available as free and open source software for PCs\, Macs\, and Linux computers. R may be downloaded by following the links here: https://www.r-project.org/. RStudio may be downloaded by following the links here: https://www.rstudio.com/. It will be possible to download and install all the R packages that we will use in this course during the workshop itself\, as and when they are needed\, and a full list of required packages will be made available to all attendees prior to the course. 
In some cases\, some additional open-source software will need to be installed to use some R packages. These include Stan for probabilistic modeling; Keras for neural network modeling; and Prophet for forecasting. Directions on how to install this software will also be provided before and during the course. \nUNSURE ABOUT SUITABILITY? THEN PLEASE ASK oliverhooker@psstatistics.com \n\n\n\nCourse Programme\n\nSunday 8th\nMeet at 43 Cook Street\, Glasgow G5 8JN between 17:00 – 21:00 \nMonday 9th – Classes from 09:30 to 17:30 \nModule 1: General and generalized linear models\, including multilevel models. In order to provide a solid foundation for the remainder of the course\, we begin by providing a comprehensive practical and theoretical overview of the principles of general and generalized linear models\, also covering their multilevel (or hierarchical) counterparts. General and generalized linear models provide a powerful set of tools for statistical modeling\, which are extremely widely used and widely applicable. Their underlying theoretical principles are quite simple and elegant\, and once these are understood\, it becomes clear how these models can be extended in many different ways to handle different statistical modeling situations. \nFor this module\, we will use very commonly used R tools such as lm\, glm\, lme4::lmer\, and lme4::glmer. In addition\, we will also use the R-based brms package\, which uses the Stan probabilistic programming language. This package allows us to perform all the same analyses that are provided by lm\, glm\, lmer\, glmer\, etc.\, using an almost identical syntax\, but also allows us to perform a much wider range of general and generalized linear model analyses. \nTuesday 10th – Classes from 09:30 to 17:30 \nHaving established a solid regression modeling foundation\, on the second day we will cover a range of nonlinear modeling extensions to the general and generalized linear modeling framework. \nModule 2: Polynomial regression. 
Polynomial regression is both a conceptually and practically simple extension of linear modeling. It can be easily accomplished using the poly function along with tools like lm\, glm\, lme4::lmer\, and lme4::glmer. Here\, we will also cover piecewise linear and polynomial regression\, using R packages such as segmented.\n\nModule 3: Parametric nonlinear regression. In some cases of nonlinear regression\, a bespoke parametric function for the relationship between the predictors and the outcome variable is used. These functions are often obtained from scientific knowledge of the problem at hand. In R\, we can use the nls function to perform parametric nonlinear regression.\n\nModule 4: Spline regression. Nonlinear regression using splines is a powerful and flexible non-parametric or semi-parametric nonlinear regression method. It is also an example of a basis function regression method. Here\, we will cover spline regression using the splines::bs and splines::ns functions\, which can be used with lm\, glm\, lme4::lmer\, lme4::glmer\, brms\, etc.\n\nModule 5: Radial basis functions. Regression using radial basis functions is a set of methods that are closely related to spline regression. They have a long history of usage in machine learning and can also be viewed as a type of artificial neural network model. Here\, we will explore radial basis function models using the Stan programming language\, which will allow us to build powerful and flexible versions of radial basis function models.\n\nModule 6: Multilayer perceptron. Closely related to radial basis functions are multilayer perceptrons. These\, and their variants and extensions\, are a major building block of deep learning (machine learning) methods. We will explore multilayer perceptrons in Stan\, but we will also use the powerful Keras library. \nWednesday 11th – Classes from 09:30 to 17:30 \nModule 7: Generalized additive models. We now turn to the major module of generalized additive models (GAMs). 
GAMs generalize many of the concepts and modules covered so far\, and represent a powerful and flexible framework for nonlinear modeling. In R\, the mgcv package provides an extensive set of tools for working with GAMs. Here\, we will provide in-depth coverage of mgcv\, including choosing smooth terms\, controlling overfitting and complexity\,\nprediction\, model evaluation\, and so on.\n\nModule 8: Generalized additive mixed models. GAMs can also be used in linear mixed effects models\, where they are known as generalized additive mixed models (GAMMs). GAMMs can also be used with the mgcv package. \nThursday 12th – Classes from 09:30 to 17:30 \nModule 9: Interaction nonlinear regression. A powerful feature of GAMs and GAMMs is the ability to model nonlinear interactions\, whether between two continuous variables\, or between one continuous and one categorical variable. Amongst other things\, interactions between continuous variables allow us to do spatial and spatio-temporal modeling. Interactions between categorical and continuous variables allow us to model how nonlinear relationships between a predictor and an outcome change as a function of the value of different categorical variables. \nModule 10: Nonlinear regression for time-series and forecasting. One major application of nonlinear regression is for modeling time-series and forecasting. Here\, we will explore the prophet library for time-series forecasting. This library\, available for both Python and R\, gives us a GAM-like framework for modeling time-series and making forecasts. \nFriday 13th – Classes from 09:30 to 16:00 \nModule 11: Gaussian process regression. Our final module deals with a type of Bayesian nonlinear regression known as Gaussian process regression. Gaussian process regression can be viewed as a kind of basis function regression\, but with an infinite number of basis functions. 
In that sense\, it generalizes spline regression\, radial basis functions\, multilayer perceptrons\, and generalized additive models\, and provides a means to overcome some practically challenging problems in nonlinear regression\, such as selecting the number and type of smooth functions. Here\, we will explore Gaussian process regression using Stan. \n\n\n\n
URL:https://www.psstatistics.com/course/generalised-linear-glm-nonlinear-nlglm-and-general-additive-models-gam-gnam01/
LOCATION:53 Morrison Street\, Glasgow\, Scotland\, G5 8LB\, United Kingdom
GEO:55.8535874;-4.267977
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=53 Morrison Street Glasgow Scotland G5 8LB United Kingdom;X-APPLE-RADIUS=500;X-TITLE=53 Morrison Street:geo:55.8535874,-4.267977
ATTACH;FMTTYPE=image/jpeg:https://www.psstatistics.com/wp-content/uploads/2019/04/gnmr01.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20190916
DTEND;VALUE=DATE:20190921
DTSTAMP:20190819T212116
CREATED:20190424T190239Z
LAST-MODIFIED:20190605T192752Z
UID:3448-1568592000-1569023999@www.psstatistics.com
SUMMARY:Structural Equation Models\, Path Analysis\, Causal Modelling and Latent Variable Models Using R
DESCRIPTION:\nCourse Overview:\nThis course provides a comprehensive introduction to a set of inter-related topics of widespread applicability in the social sciences: structural equation modelling\, path analysis\, causal modelling\, mediation analysis\, latent variable modelling (including factor analysis and latent class analysis)\, Bayesian networks\, graphical models\, and other related topics. The course begins with a thorough review\, both practical and theoretical\, of regression modelling\, focusing particularly on general and generalized linear regression. We then turn to the topic of path analysis. At its simplest\, path analysis can be seen as an extension of standard (e.g. linear) regression analysis to cases where there are more complex structural relationships between the predictor and outcome variables. More generally\, and more usefully\, we can view path analysis as specifying and modelling causal relationships between observed variables. In order to fully appreciate path analyses\, and especially their role as causal models\, we will introduce the concept of directed acyclic graphical models\, also known as Bayesian networks\, which are a powerful mathematical and conceptual tool for reasoning about causal relationships. We then thoroughly cover the topic of mediation analysis\, which can be seen as a special\, though still very widely applicable\, case of path analysis and causal models. We then turn to structural equation modelling\, which can be seen as an extension of path analysis\, particularly due to the inclusion of unobserved or latent variables. More generally\, structural equation models allow for the specification and testing of more complex theoretical models of the observed data. In order to properly introduce structural equation models\, we first explore latent variable models\, particularly factor analysis and latent class models. 
In our coverage of structural equation models\, we deal with the general concepts of model identification\, inference\, and evaluation\, and then explore special topics such as categorical\, nonlinear\, and non-normal structural equation models\, multilevel structural equation models\, and latent growth curve modelling. \n\n\nIntended Audience\nThis course is aimed at anyone who is interested in learning and applying this powerful and flexible set of statistical modelling methods\, which have widespread application across the social\, medical\, and biological sciences. \nVenue – PS statistics head office\, 53 Morrison Street\, Glasgow\, G5 8LB – Google map \nAvailability – 20 places \nDuration – 5 days \nContact hours – Approx. 28 hours \nECTS – Equal to 3 ECTS \nLanguage – English \nPackages\nWe offer COURSE ONLY and ACCOMMODATION PACKAGES;\n• COURSE ONLY – Includes lunch\, refreshments\, and a welcome meal on Monday evening.\n• ACCOMMODATION PACKAGE (to be purchased in addition to the course only option) – Includes breakfast\, lunch\, refreshments\, and a welcome dinner on Monday evening. Self-catering facilities are available in the accommodation. Accommodation is in multiple-occupancy (max 3-4 people) single-sex en-suite rooms. Arrival Sunday 15th September (between 17:00-21:00) and departure Friday 20th September (accommodation must be vacated by 09:15). \nTo book ‘COURSE ONLY’ with the option to add the additional ‘ACCOMMODATION PACKAGE’ please scroll to the bottom of this page. \nOther payment options are available; please email oliverhooker@psstatistics.com \nPLEASE READ – CANCELLATION POLICY: Cancellations are accepted up to 28 days before the course start date\, subject to a 25% cancellation fee. Cancellations later than this may be considered; contact oliverhooker@psstatistics.com. Failure to attend will result in the full cost of the course being charged. 
In the unfortunate event that a course is cancelled due to unforeseen circumstances\, a full refund of the course fees (and accommodation fees if booked through PS statistics) will be credited. However\, PS statistics will not be held responsible/liable for any travel fees\, accommodation costs\, or other expenses incurred by you as a result of the cancellation. Because of this\, PS statistics strongly recommends that any travel and accommodation booked by you or your institute is refundable/flexible\, and that you delay booking your travel and accommodation as close to the course start date as is economically viable. \n\n\n\n \nDr. Mark Andrews\n\n\n\n\nTeaching Format\, pre-requisites and software requirements\n\nThis course will be hands-on and workshop based. Throughout each day\, there will be some lecture-style presentation\, i.e.\, using slides\, introducing and explaining key concepts. However\, even in these cases\, the topics being covered will include practical worked examples that we will work through together. \nAssumed quantitative knowledge \nWe assume familiarity with linear regression analysis\, and with the major concepts of classical inferential statistics (p-values\, hypothesis testing\, confidence intervals\, model comparison\, etc). Some passing familiarity with models such as logistic regression will also be assumed. \nAssumed computer background \nR experience is desirable but not essential. Although we will be using R extensively\, all the code that we use will be made available\, and so attendees will just need to make minor modifications to this code. Attendees should install R and RStudio on their own computers before the workshops\, and have some minimal familiarity with the R environment. \nEquipment and software requirements \nA laptop computer with working versions of R and RStudio is required. R and RStudio are both available as free and open source software for PCs\, Macs\, and Linux computers. 
R may be downloaded by following the links here: https://www.r-project.org/. RStudio may be downloaded by following the links here: https://www.rstudio.com/. It will be possible to download and install all the R packages that we will use in this course during the workshop itself\, as and when they are needed. The major R packages will include lavaan\, blavaan\, sem\, and brms\, but the full list of required packages will be made available to all attendees prior to the course. In some cases\, some additional open-source software will need to be installed to use some R packages. In particular\, these include Stan and JAGS for probabilistic modelling. Directions on how to install this software will also be provided before and during the course. \nUNSURE ABOUT SUITABILITY? THEN PLEASE ASK oliverhooker@psstatistics.com \n\n\n\nCourse Programme\n\nSunday 15th\nMeet at 43 Cook Street\, Glasgow G5 8JN between 17:00 – 21:00 \nMonday 16th – Classes from 09:30 to 17:30 \nTopic 1: General and generalized linear regression\, including multilevel models. In order to provide a solid foundation for the remainder of the course\, we begin by providing a comprehensive overview of general linear models\, also covering their multilevel (or hierarchical) counterparts. For this topic\, we will use R tools such as lm and lme4::lmer.\nTopic 2: Path analysis. Having covered regression\, we proceed to path analysis\, which can be viewed as a straightforward extension of standard regression analysis. The primary R package that we will use for this introduction to path analysis is lavaan. \nTopic 3: Graph theory and causal models. Path analysis can\, and should\, be seen as more than just an extension of regression analysis; it can be seen as a type of causal model. 
In order to explore this in depth\, we will introduce the concepts of directed acyclic graphs and Bayesian networks\, originally developed in computer science and artificial intelligence research\, and show how they provide a powerful framework for reasoning about and with causal models. \nTuesday 17th – Classes from 09:30 to 17:30 \nTopic 4: Mediation analysis. A special case of path analysis is mediation analysis. This is where the causal effect of one or more variables on some outcome is by way of their effect on an intermediary variable. For example\, we say the effect of smoking on lung cancer is mediated by the tar content of cigarettes (smoking causes tar build-up in the lungs\, and tar build-up in the lungs causes lung cancer). In this section\, we will provide an introduction to mediation analysis\, and pay particular attention to how it has traditionally been analysed. \nTopic 5: Causal mediation analysis. Traditional mediation analysis\, although useful\, does not extend easily to situations where there are interactions (moderations) between predictor and moderator variables\, or where there are nonlinear effects between variables\, among other variants. Causal mediation analysis is a more general framework for mediation modelling\, based on modelling counterfactuals and using graphical models. Here\, we will also discuss the relationship between causal mediation analysis and traditional mediation analysis\, and also how causal mediation analysis is related to the concept of instrumental variables. \nWednesday 18th – Classes from 09:30 to 17:30 \nTopic 6: Factor analysis. Latent variable models assume the presence of variables that are not directly observed but are assumed to affect other variables that are observed. One of the most commonly used latent variable models\, and one that can be seen as a special case of the structural equation models that we will explore on Day 4\, is factor analysis. 
Here\, we will describe factor analysis\, distinguishing between what are known as exploratory and confirmatory factor analysis models. We will also discuss how factor analysis relates to other widely used latent variable modelling techniques\, such as principal components analysis and independent components analysis. \nTopic 7: Latent class models. Latent class models\, also known as probabilistic mixture models\, are another widely used latent variable modelling technique. They differ from factor analyses and related models in that they assume the latent variable is a categorical variable. What this entails is that latent class models assume that the observed data have emerged from a set of categorically distinct underlying unobserved components. \nThursday 19th – Classes from 09:30 to 17:30 \nDays 4 and 5 will cover structural equation models in depth. On Day 4\, we will provide a comprehensive introduction to all the major and general concepts of structural equation models. On Day 5\, we will explore different variants of structural equation models. \nTopic 8: Structural equation modelling: general concepts. Structural equation models can be seen as an extension of path analysis\, particularly due to the use of latent variables. In this introduction to the topic\, we will first explore different examples of structural equation models using real-world example data sets\, and consider the standard or typical types of assumed models. We will also cover the major and general topics of model identification\, model inference\, and model evaluation. Here\, we will also describe traditional and more modern\, and more flexible\, approaches to identification\, inference\, and evaluation. The R packages that we will use here will primarily include lavaan\, blavaan\, sem\, and brms\, and we will also use probabilistic programming languages such as JAGS and Stan. 
\nFriday 20th – Classes from 09:30 to 16:00 \nTopic 9: Nonlinear\, non-normal\, and categorical structural equation models. The standard\, or typical\, structural equation model assumes that variables are continuous\, have normal distributions\, and that there are linear relationships between these variables. While this is often a useful default or starting assumption\, more powerful and flexible structural equation models are possible if we allow for continuous variables that have non-normal distributions\, nonlinear relationships between these variables\, and variables that are categorical rather than continuous. \nTopic 10: Multilevel structural equation models. Multilevel structural equation models allow us to model variation across different groups. As an example from the context of education studies\, we could model how the phenomena modelled by structural equation models vary across different schools and across different regions\, etc. \nTopic 11: Latent growth curve modelling. A special case of structural equation models is the latent growth curve model. These are widely used with data from longitudinal or other repeated-measures studies\, and their primary purpose is to model change or developmental trajectories over time. \n\n\n\n
URL:https://www.psstatistics.com/course/structural-equation-modelling-and-path-analysis-smpa01/
LOCATION:53 Morrison Street\, Glasgow\, Scotland\, G5 8LB\, United Kingdom
GEO:55.8535874;-4.267977
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=53 Morrison Street Glasgow Scotland G5 8LB United Kingdom;X-APPLE-RADIUS=500;X-TITLE=53 Morrison Street:geo:55.8535874,-4.267977
ATTACH;FMTTYPE=image/png:https://www.psstatistics.com/wp-content/uploads/2019/04/smpa-3.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20190923
DTEND;VALUE=DATE:20190928
DTSTAMP:20190819T212116
CREATED:20190424T190106Z
LAST-MODIFIED:20190816T113350Z
UID:3444-1569196800-1569628799@www.psstatistics.com
SUMMARY:Python for data science\, machine learning\, and scientific computing (PDMS01)
DESCRIPTION:\nCourse Overview:\nPython is one of the most widely used and highly valued programming languages in the world\, and is especially widely used in data science\, machine learning\, and other scientific computing applications. This course provides both a general introduction to programming with Python and a comprehensive introduction to using Python for data science\, machine learning\, and scientific computing. The major topics that we will cover include the following: the fundamentals of general-purpose programming in Python; using Jupyter notebooks as a reproducible interactive Python programming environment; numerical computing using numpy; data processing and manipulation using pandas; data visualization using matplotlib\, seaborn\, ggplot\, bokeh\, altair\, etc; symbolic mathematics using sympy; data science and machine learning using scikit-learn\, keras\, and tensorflow; Bayesian modelling using PyMC3 and PyStan; and high-performance computing with Cython\, Numba\, IPyParallel\, and Dask. Overall\, this course aims to provide a solid introduction to Python generally as a programming language\, and to its principal tools for doing data science\, machine learning\, and scientific computing. (Note that this course will focus on Python 3 exclusively\, given that Python 2 has now reached its end of life.) \n \n\n\nIntended Audience\nThis course is aimed at anyone who is interested in learning the fundamentals of Python generally and especially how\nPython can be used for data science\, broadly defined. Python and Python-based data science are applicable to academic\nresearch in all fields of science and engineering\, as well as to data-intensive industries and services such as finance\,\npharmaceuticals\, healthcare\, IT\, and manufacturing. \nVenue – PS statistics head office\, 53 Morrison Street\, Glasgow\, G5 8LB – Google map \nAvailability – 30 places \nDuration – 5 days \nContact hours – Approx. 
28 hours \nECTS – Equal to 3 ECTS \nLanguage – English \nPackages\nWe offer COURSE ONLY and ACCOMMODATION PACKAGES;\n• COURSE ONLY – Includes lunch\, refreshments\, and a welcome meal on Monday evening.\n• ACCOMMODATION PACKAGE (to be purchased in addition to the course only option) – Includes breakfast\, lunch\, refreshments\, and a welcome dinner on Monday evening. Self-catering facilities are available in the accommodation. Accommodation is in multiple-occupancy (max 3-4 people) single-sex en-suite rooms. Arrival Sunday 22nd September (between 17:00-21:00) and departure Friday 27th September (accommodation must be vacated by 09:15). \nTo book ‘COURSE ONLY’ with the option to add the additional ‘ACCOMMODATION PACKAGE’ please scroll to the bottom of this page. \nOther payment options are available; please email oliverhooker@psstatistics.com \nPLEASE READ – CANCELLATION POLICY: Cancellations are accepted up to 28 days before the course start date\, subject to a 25% cancellation fee. Cancellations later than this may be considered; contact oliverhooker@psstatistics.com. Failure to attend will result in the full cost of the course being charged. In the unfortunate event that a course is cancelled due to unforeseen circumstances\, a full refund of the course fees (and accommodation fees if booked through PS statistics) will be credited. However\, PS statistics will not be held responsible/liable for any travel fees\, accommodation costs\, or other expenses incurred by you as a result of the cancellation. Because of this\, PS statistics strongly recommends that any travel and accommodation booked by you or your institute is refundable/flexible\, and that you delay booking your travel and accommodation as close to the course start date as is economically viable. \n\n\n\n \nDr. Mark Andrews\n\n\n\n\nTeaching Format\n\nThis course will be hands-on and workshop based. Throughout each day\, there will be some lecture-style presentation\, i.e.\, using slides\, introducing and explaining key concepts. 
However\, even in these cases\, the topics being covered will include practical worked examples that we will work through together. \nAssumed quantitative knowledge \nWe will assume only a minimal amount of familiarity with some general statistical and mathematical concepts. These\nconcepts will arise when we discuss numerical computing\, symbolic maths\, and statistics and machine learning.\nHowever\, expertise and proficiency with these concepts are not necessary. Anyone who has taken any undergraduate\n(Bachelor’s) level course on (applied) statistics or mathematics can be assumed to have sufficient familiarity with these concepts. \nAssumed computer background \nNo prior experience with Python or any other programming language is required. Of course\, any familiarity with any\nother programming language will be helpful\, but is not required. \nEquipment and software requirements \nAttendees of the course should bring a laptop computer with Python (version 3) and the Python packages that we will\nuse (such as numpy\, pandas\, sympy\, etc) installed. All the required software is free and open source and is available on Windows\, macOS\, and Linux. Instructions on how to install and configure all the software will be provided before the start of the course. We will also provide time during the workshops to ensure that all software is installed and configured properly. \nUNSURE ABOUT SUITABILITY? THEN PLEASE ASK oliverhooker@psstatistics.com \n\n\n\nCourse Programme\n\nSunday 22nd \nMeet at 43 Cook Street\, Glasgow G5 8JN between 17:00 – 21:00 \nMonday 23rd – Classes from 09:30 to 17:30 \n• Topic 1: The What and Why of Python. In order to provide some general background and context\, we will describe\nwhere Python came from\, what its major design principles and intended uses were originally\, and where and how it\nis now used. 
We will see that Python is now extremely widely used\, especially in powering the web\, in\ndata science and machine learning\, and in system-level programming. Here\, we also compare and contrast Python\nand R\, given that both are extremely widely used in data science. \n• Topic 2: Installing and setting up Python. There are many ways to write and execute code in Python. Which to use\ndepends on personal preference and the type of programming that is being done. Here\, we will explore some of the\ncommonly used Integrated Development Environments (IDEs) for Python\, which include Spyder and PyCharm. Here\,\nwe will also mention and briefly describe Jupyter notebooks\, which are widely used for scientific applications of\nPython\, and are an excellent tool for doing reproducible interactive work. We will cover Jupyter more extensively\nstarting on Day 3. Also as part of this topic\, we will describe how to use virtual environments and package installers such as pip and conda. \n• Topic 3: Introduction to Python: Data Structures. We will begin our coverage of programming with Python by\nintroducing its different data structures and the operations on them. This will begin with the elementary data types such as integers\, floats\, Booleans\, and strings\, and the common operations that can be applied to these data types. We will then proceed to the so-called collection data structures\, which primarily include lists\, dictionaries\, tuples\, and sets. \n• Topic 4: Introduction to Python: Programming. Having introduced Python’s data types\, we will now turn to how to\nprogram in Python. We will begin with iteration\, such as the for and while loops. We will then cover conditionals\nand functions. \nTuesday 24th – Classes from 09:30 to 17:30 \n• Topic 5: Modules\, packages\, and imports. 
Python is extended by hundreds of thousands of additional packages.\nHere\, we will cover how to install and import these packages\, and also how to write our own modules and\npackages. \n• Topic 6: Numerical programming with numpy. Although not part of Python’s official standard library\, the numpy\npackage is part of the de facto standard library for any scientific and numerical programming. Here we will\nintroduce numpy\, especially numpy arrays and their built-in functions (i.e.\, “methods”). \n• Topic 7: Data processing with pandas. The pandas library provides the means to represent and manipulate data frames.\nLike numpy\, pandas can be seen as part of the de facto standard library for data-oriented uses of Python. \n• Topic 8: Object Oriented Programming. Python is an object-oriented language\, and object-oriented programming in\nPython is used extensively in anything beyond the very simplest types of programs. Moreover\, compared to other\nlanguages\, object-oriented programming in Python is relatively easy to learn. Here\, we provide a comprehensive\nintroduction to object-oriented programming in Python. \n• Topic 9: Other Python programming features. In this section\, we will cover some important features of Python not\nyet covered. These include exception handling\, list and dictionary comprehensions\, itertools\, advanced collection\ntypes including defaultdict\, anonymous functions\, decorators\, etc. \nWednesday 25th – Classes from 09:30 to 17:30 \n• Topic 10: Jupyter notebooks and Jupyterlab. Although we have already introduced Jupyter notebooks\, here we\nwill explore them properly. Jupyter notebooks are reproducible and interactive computing environments that\nsupport numerous programming languages\, although Python remains the principal language used in Jupyter\nnotebooks. Here\, we’ll explore their major features and how they can be shared easily using GitHub and Binder. \n• Topic 11: Data Visualization. 
Python provides many options for data visualization. The matplotlib library is a low-level plotting library that allows for considerable control of the plot\, albeit at the price of a considerable amount of low-level code. Based on matplotlib\, and providing a much higher-level interface to the plot\, is the seaborn library. This allows us to produce complex data visualizations with a minimal amount of code. Similar to seaborn is ggplot\, which is a direct port of the widely used R based visualization library. In this section\, we will also consider a set of other visualization libraries for Python. These include plotly\, bokeh\, and altair. \n• Topic 12: Symbolic mathematics. Symbolic mathematics systems\, also known as computer algebra systems\, allow\nus to algebraically manipulate and solve symbolic mathematical expressions. In Python\, the principal symbolic mathematics library is sympy. This allows us to simplify mathematical expressions\, compute derivatives\, integrals\,\nand limits\, solve equations\, algebraically manipulate matrices\, and more. \n• Topic 13: Statistical data analysis. In this section\, we will describe how to perform widely used statistical analyses in Python. Here we will start with the statsmodels package\, which provides linear and generalized linear models as well as many other widely used statistical models. We will also introduce the scikit-learn package\, which we will use more widely on Day 4\, and use it for regression and classification analysis. \nThursday 26th – Classes from 09:30 to 17:30 \n• Topic 14: Machine learning. Python is arguably the most widely used language for machine learning. In this section\, we will explore some of the major Python machine learning tools that are part of the scikit-learn package. This section continues our coverage of this package\, which began in Topic 13 on Day 3. 
Here\, we will cover machine learning tools such as support vector machines\, decision trees\, random forests\, k-means clustering\, dimensionality reduction\, model evaluation\, and cross-validation. \n• Topic 15: Neural networks and deep learning. A popular subfield of machine learning involves the use of artificial neural networks and deep learning methods. In this section\, we will explore neural networks and deep learning using the keras library\, which is a high-level interface to neural network and deep learning libraries such as Tensorflow\, Theano\, or the Microsoft Cognitive Toolkit (CNTK). Examples that we will consider here include image classification and other classification problems taken from\, for example\, the UCI Machine Learning Repository. \nFriday 27th – Classes from 09:30 to 16:00 \n• Topic 16: Bayesian models. Two probabilistic programming languages for Bayesian modelling in Python are PyMC3\nand PyStan. PyMC3 is a Python-native probabilistic programming language\, while PyStan is the Python interface to\nthe Stan programming language\, which is also very widely used in R. Both PyMC3 and PyStan are extremely\npowerful tools and can implement arbitrary probabilistic models. Here\, we will not have time to explore either in\ndepth\, but will be able to work through a number of nontrivial examples\, which will illustrate the general features\nand usage of both languages. \n• Topic 17: High performance Python. The final topic that we will consider in this course is high performance\ncomputing with Python. While many of the tools that we considered above execute extremely quickly because they\ninterface with compiled code written in C/C++ or Fortran\, Python itself is a high-level\, dynamically typed\,\ninterpreted programming language. As such\, native Python code does not execute as fast as compiled languages\nsuch as C/C++ or Fortran. However\, it is possible to achieve compiled-language speeds in Python by compiling\nPython code. 
Here\, we will consider Cython and Numba\, both of which allow us to achieve C/C++ speeds in Python\nwith minimal extensions to our code. Also in this section\, we will consider parallelization in Python\, in particular using IPyParallel and Dask\, both of which allow easy parallel and distributed processing using Python. \n\n\n\n
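As a small taste of the numpy and pandas material described in Topics 6 and 7 above, the following minimal sketch shows vectorised arithmetic on a numpy array and a grouped summary of a pandas data frame. The data and column names are invented for illustration only and are not course materials.

```python
import numpy as np
import pandas as pd

# Topic 6: numpy arrays support vectorised arithmetic and built-in methods.
x = np.array([1.0, 2.0, 3.0, 4.0])
mean_x = x.mean()  # arithmetic mean of the array

# Topic 7: pandas data frames represent tabular data; here we compute
# a per-group mean (the column names are hypothetical).
df = pd.DataFrame({
    "group": ["a", "a", "b", "b"],
    "value": [1.0, 2.0, 3.0, 4.0],
})
group_means = df.groupby("group")["value"].mean()
```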
URL:https://www.psstatistics.com/course/python-for-data-science-machine-learning-and-scientific-computing-pdms01/
LOCATION:53 Morrison Street\, Glasgow\, Scotland\, G5 8LB\, United Kingdom
GEO:55.8535874;-4.267977
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=53 Morrison Street Glasgow Scotland G5 8LB United Kingdom;X-APPLE-RADIUS=500;X-TITLE=53 Morrison Street:geo:-4.267977,55.8535874
ATTACH;FMTTYPE=image/jpeg:https://www.psstatistics.com/wp-content/uploads/2019/04/dsap.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20191014
DTEND;VALUE=DATE:20191019
DTSTAMP:20190819T212116
CREATED:20180515T155943Z
LAST-MODIFIED:20190729T002731Z
UID:3115-1571011200-1571443199@www.psstatistics.com
SUMMARY:Statistical modelling of time-to-event data using survival analysis: an introduction for animal behaviourists\, ecologists and evolutionary biologists (TTED02)
DESCRIPTION:\nCourse Overview:\nSurvival analysis is a set of statistical methods initially designed to analyse data giving the times at which individuals die\, and to assess the effect that different predictor variables have on the rate of death. However\, its applications are much broader than this: it can be used to analyse any time-to-event data. Ecologists and evolutionary biologists often encounter data of this kind. Often factors influencing survival itself will be of interest. But there are many other cases\, e.g. what factors influence the time of first breeding? Or the time taken to reach maturity? Animal behaviourists too will encounter this type of data frequently\, e.g. what factors influence the time it takes to learn a novel behaviour pattern? Or the time to respond to a stimulus? And yet the techniques of survival analysis are not generally well known by researchers in these disciplines. \nIn this course\, you will learn how to apply survival analysis models to quantify the effect that predictor variables (continuous or discrete) have on the rate at which events occur\, and how to test hypotheses about these effects. We will focus on a flexible modelling technique called the Cox proportional hazards model\, which makes minimal assumptions about the underlying probability distributions. You will learn how to fit and interpret these models\, how to evaluate their assumptions\, and how to extend them to model time-dependent variables\, random effects\, multistate models and competing risks models. \n\n\n\nIntended Audience\nThis course would be suitable for participants who have a good understanding of the basic theory\nunderlying multiple regression/linear models and know how to apply them in R. No previous experience or knowledge of survival analysis is necessary. \nVenue – PS statistics head office\, 53 Morrison Street\, Glasgow\, G5 8LB – Google map\nAvailability – 20 places\nDuration – 5 days\nContact hours – Approx. 
28 hours\nECTS – Equal to 3 ECTS\nLanguage – English \nPackages\nWe offer COURSE ONLY and ACCOMMODATION PACKAGES;\n• COURSE ONLY – Includes lunch and refreshments.\n• ACCOMMODATION PACKAGE (to be purchased in addition to the course only option) – Includes breakfast\, lunch\, refreshments and a welcome dinner on Monday evening. Self-catering facilities are available in the accommodation. Accommodation is approximately a 6-minute walk from the PS statistics head office. Accommodation is multiple occupancy (max 3-4 people) single-sex en-suite rooms. Arrival Sunday 13th October (between 17:00-21:00) and departure Friday 18th October (accommodation must be vacated by 9am). An additional night’s accommodation can be purchased (departure 9am the following morning); email for details. \nOther payment options are available; please email oliverhooker@psstatistics.com \nPLEASE READ – CANCELLATION POLICY: Cancellations are accepted up to 28 days before the course start date\, subject to a 25% cancellation fee. Later cancellations may be considered; contact oliverhooker@psstatistics.com. Failure to attend will result in the full cost of the course being charged. In the unfortunate event that a course is cancelled due to unforeseen circumstances\, a full refund of the course fees (and accommodation fees\, if booked through PS statistics) will be credited. However\, PS statistics will not be held responsible/liable for any travel fees\, accommodation costs or other expenses incurred by you as a result of the cancellation. Because of this\, PS statistics strongly recommends that any travel and accommodation booked by you or your institute is refundable/flexible\, and that you delay booking travel and accommodation until as close to the course start date as is economically viable. \n\n\n\n \nDr Will Hoppitt\n\n\n \n\n\n\n\n\nTeaching Format\n\nIntroductory lectures on the concepts and refreshers on R usage. Intermediate-level lectures interspersed with hands-on mini practicals and longer projects. 
Round-table discussions about the analysis requirements of attendees (with the option for them to bring their own data). Data sets for computer practicals will be provided by the instructors\, but participants are welcome to bring their own data. \nAssumed quantitative knowledge\nA good understanding of statistical concepts\, statistical significance and hypothesis testing. \nAssumed computer background\nR experience is desirable but not essential. Attendees should ideally be able to import/export data\, understand basic R syntax and write simple functions and loops. \nEquipment and software requirements\nA laptop/personal computer with working versions of R and RStudio. R and RStudio are available for both PC and Mac and can be downloaded for free by following these links. \nhttps://cran.r-project.org/ \nDownload RStudio \nIt is essential that you come with all necessary software and packages already installed (you will be sent a list of packages prior to the course)\, as internet access may not always be available. \nUNSURE ABOUT SUITABILITY? THEN PLEASE ASK oliverhooker@psstatistics.com \n\n\n\nCourse Programme\n\nSunday 13th\nMeet at 43 Cook Street\, Glasgow G5 8JN at approx. 17:00 onwards \nMonday 14th – Classes from 09:30 to 17:30 \nModule 1: Statistical modelling of rates and times \nModule 2: Parametric survival models and the Cox model \nTuesday 15th – Classes from 09:30 to 17:30 \nModule 3: Fitting Cox models \nModule 4: Interpreting Cox models \nWednesday 16th – Classes from 09:30 to 17:30 \nModule 5: Evaluating the proportional hazards assumption \nModule 6: Stratified Cox models \nThursday 17th – Classes from 09:30 to 17:30 \nModule 7: Time-dependent variables \nModule 8: Frailty models and multistate models \nFriday 18th – Classes from 09:30 to 17:30 \nModule 9: Competing risks models \nModule 10: Open session \n\n\n\n
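To make the idea of time-to-event data concrete, the sketch below hand-rolls the Kaplan-Meier product-limit estimator of the survival function for a tiny invented data set with censoring. It is written in Python purely for illustration; the course itself works in R, where tools such as the survival package do this properly.

```python
import numpy as np

# Invented time-to-event data: durations and an event indicator
# (1 = event observed, 0 = censored before the event occurred).
durations = np.array([2, 3, 3, 5, 7])
observed = np.array([1, 1, 0, 1, 0])

# Kaplan-Meier: at each distinct event time t, multiply the running
# survival estimate by (1 - d/n), where d is the number of events at t
# and n is the number still at risk just before t.
event_times = np.unique(durations[observed == 1])
surv = []
s = 1.0
for t in event_times:
    n_at_risk = np.sum(durations >= t)
    d = np.sum((durations == t) & (observed == 1))
    s *= 1.0 - d / n_at_risk
    surv.append(s)
# surv now holds the estimated survival probability just after each event time
```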
URL:https://www.psstatistics.com/course/statistical-modelling-of-time-to-event-data-using-survival-analysis-tted02/
LOCATION:53 Morrison Street\, Glasgow\, Scotland\, G5 8LB\, United Kingdom
GEO:55.8535874;-4.267977
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=53 Morrison Street Glasgow Scotland G5 8LB United Kingdom;X-APPLE-RADIUS=500;X-TITLE=53 Morrison Street:geo:-4.267977,55.8535874
ATTACH;FMTTYPE=image/jpeg:https://www.psstatistics.com/wp-content/uploads/2018/12/TTED.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20191104
DTEND;VALUE=DATE:20191109
DTSTAMP:20190819T212116
CREATED:20170526T082542Z
LAST-MODIFIED:20190702T154441Z
UID:2830-1572825600-1573257599@www.psstatistics.com
SUMMARY:Behavioural data analysis using maximum likelihood in R (BDML02)
DESCRIPTION:\nCourse Overview:\nThis 5-day course will involve a combination of lectures and practical sessions. Students will learn to build and fit custom models for analysing behavioural data using maximum likelihood techniques in R. This flexible approach allows a researcher to a) use a statistical model that directly represents their hypothesis\, in cases where standard models are not appropriate\, and b) better understand how standard statistical models (e.g. GLMs) are fitted\, many of which are fitted by maximum likelihood. Students will learn how to deal with binary\, count and continuous data\, including time-to-event data\, which is commonly encountered in behavioural analysis. \nAfter successfully completing this course\, students should be able to: \n\nfit a multi-parameter maximum likelihood model in R\nderive likelihood functions for binary\, count and continuous data\ndeal with time-to-event data\nbuild custom models to test specific behavioural hypotheses\nconduct hypothesis tests and construct confidence intervals\nuse Akaike’s information criterion (AIC) and model averaging\nunderstand how maximum likelihood relates to Bayesian techniques\n\n\n\nIntended Audience\nAny researchers (from postgraduate students to senior investigators) interested in analysing behavioural data. Examples will be primarily from non-human animal behaviour studies\, but the methods will also be applicable to many researchers studying human behaviour. The course is intended for those wishing to construct custom statistical models and for those wishing to better understand the workings of standard statistical techniques that use maximum likelihood methods (e.g. GLMs). \nVenue – PS statistics head office\, 53 Morrison Street\, Glasgow\, G5 8LB – Google map \nAvailability – 30 places \nDuration – 5 days \nContact hours – Approx. 
35 hours \nECTS – Equal to 3 ECTS \nLanguage – English \nPackages\nWe offer COURSE ONLY and ACCOMMODATION PACKAGES;\n• COURSE ONLY – Includes lunch\, refreshments and a welcome meal on Monday evening.\n• ACCOMMODATION PACKAGE (to be purchased in addition to the course only option) – Includes breakfast\, lunch\, refreshments and a welcome dinner on Monday evening. Self-catering facilities are available in the accommodation. Accommodation is approximately a 6-minute walk from the PS statistics head office. Accommodation is multiple occupancy (max 3-4 people) single-sex en-suite rooms. Arrival Sunday 3rd November (after 5pm) and departure Friday 8th November (accommodation must be vacated by 9am). \nOther payment options are available; please email oliverhooker@psstatistics.com \nPLEASE READ – CANCELLATION POLICY: Cancellations are accepted up to 28 days before the course start date\, subject to a 25% cancellation fee. Later cancellations may be considered; contact oliverhooker@psstatistics.com. Failure to attend will result in the full cost of the course being charged. In the unfortunate event that a course is cancelled due to unforeseen circumstances\, a full refund of the course fees (and accommodation fees\, if booked through PS statistics) will be credited. However\, PS statistics will not be held responsible/liable for any travel fees\, accommodation costs or other expenses incurred by you as a result of the cancellation. Because of this\, PS statistics strongly recommends that any travel and accommodation booked by you or your institute is refundable/flexible\, and that you delay booking travel and accommodation until as close to the course start date as is economically viable. \n\n\n\n \nDr. William Hoppitt\n\n\n \n\n\n\n\n\n\nTeaching Format\n\nThere will be a combination of lectures and practicals. Practicals will be based on the topics covered in the preceding lectures. 
Data sets for computer practicals will be provided by the instructors. \nAssumed quantitative knowledge \nA basic understanding of statistical concepts (mean\, variance\, correlation\, regression\, ANOVA\, etc.) and probability. \nAssumed computer background \nSome familiarity with R. The ability to import/export and manipulate data\, fit basic statistical models and generate simple exploratory and diagnostic plots. \nEquipment and software requirements \nA laptop/personal computer with working versions of R and RStudio. R and RStudio are available for both PC and Mac and can be downloaded for free by following these links. \nhttps://cran.r-project.org/\nDownload RStudio \nIt is essential that you come with all necessary software and packages already installed (you will be sent a list of packages prior to the course)\, as internet access may not always be available. \nUNSURE ABOUT SUITABILITY? THEN PLEASE ASK oliverhooker@psstatistics.com \n\n\n\nCourse Programme\n\nSunday 3rd\nMeet at 43 Cook Street\, Glasgow G5 8JN at approx. 17:00 onwards \nMonday 4th – Classes from 09:30 to 17:30 \nModule 1: The process of statistical inference and the role of statistical models. Why learn likelihood techniques? 
Course outline\nModule 2: Maximum likelihood estimation: single-parameter models for binary data \nTuesday 5th – Classes from 09:30 to 17:30 \nModule 3: Models with several parameters for binary data\, optimization algorithms\nModule 4: Testing hypotheses and constructing confidence intervals \nWednesday 6th – Classes from 09:30 to 17:30 \nModule 5: Modelling count data and the Poisson distribution\nModule 6: Modelling continuous data\, the normal distribution and the relationship of maximum likelihood to least squares \nThursday 7th – Classes from 09:30 to 17:30 \nModule 7: Modelling time-to-event data and the exponential distribution\nModule 8: Akaike’s information criterion (AIC) and model averaging \nFriday 8th – Classes from 09:30 to 16:00 \nModule 9: A brief introduction to Bayesian analysis\, its practical advantages\, and its relationship to maximum likelihood \nAfternoon: Troubleshooting and final summary \n\n\n\n
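As a taste of the single-parameter maximum likelihood idea in Module 2, the sketch below maximises a Bernoulli log-likelihood for binary data by a simple grid search over the success probability. It is written in Python purely for illustration (the course itself uses R, e.g. its optim function), and the data are invented.

```python
import numpy as np

# Invented binary data: 7 successes out of 10 trials.
y = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])

def neg_log_lik(p):
    # Negative Bernoulli log-likelihood: -sum[y*log(p) + (1-y)*log(1-p)]
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Grid search over candidate values of p (a stand-in for a real
# optimization algorithm, which is the subject of Module 3).
p_grid = np.linspace(0.001, 0.999, 999)
nll = np.array([neg_log_lik(p) for p in p_grid])
p_hat = p_grid[np.argmin(nll)]
# p_hat should match the analytic MLE for binary data, mean(y) = 0.7
```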
URL:https://www.psstatistics.com/course/behavioural-data-analysis-using-maximum-likelihood-bdml02/
LOCATION:53 Morrison Street\, Glasgow\, Scotland\, G5 8LB\, United Kingdom
GEO:55.8535874;-4.267977
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=53 Morrison Street Glasgow Scotland G5 8LB United Kingdom;X-APPLE-RADIUS=500;X-TITLE=53 Morrison Street:geo:-4.267977,55.8535874
ATTACH;FMTTYPE=image/jpeg:https://www.psstatistics.com/wp-content/uploads/2017/05/10.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20191104
DTEND;VALUE=DATE:20191109
DTSTAMP:20190819T212116
CREATED:20190424T192351Z
LAST-MODIFIED:20190702T152942Z
UID:3439-1572825600-1573257599@www.psstatistics.com
SUMMARY:Introduction to Bayesian data analysis for social and behavioural sciences using R and Stan (BDRS02)
DESCRIPTION:\nCourse Overview:\nThis course provides a general introduction to Bayesian data analysis using R and the Bayesian probabilistic programming language Stan. We begin with a gentle introduction to all the fundamental principles and concepts of Bayesian data analysis: the likelihood function\, prior distributions\, posterior distributions\, high posterior density intervals\, posterior predictive distributions\, marginal likelihoods\, Bayes factors\, etc. We will do this using some simple probabilistic models that are easy to understand and easy to work with. We then proceed to more practically useful Bayesian analyses\, starting with general linear models\, followed by generalized linear models\, including logistic regression and Poisson regression\, followed by multilevel general and generalized linear models. For these analyses\, we will use real world data sets\, and carry out the analysis with Stan using the brms interface to Stan in R. With each example\, we will explore general concepts such as model checking and improvement using posterior predictive checks\, and model evaluation using cross-validation\, WAIC\, and Bayes factors. In the final part of the course\, we will delve into some more advanced topics: understanding Markov Chain Monte Carlo in depth\, Gaussian process regression\, and probabilistic mixture models. \n\n\n\nIntended Audience\nThis course is aimed at anyone who is interested in learning and applying Bayesian data analysis in any area of science\, including the social sciences\, life sciences\, and physical sciences. No prior experience or familiarity with Bayesian statistics is required. \nVenue – PS statistics head office\, 53 Morrison Street\, Glasgow\, G5 8LB – Google map \nAvailability – 20 places \nDuration – 5 days \nContact hours – Approx. 
28 hours \nECTS – Equal to 3 ECTS \nLanguage – English \nPackages\nWe offer COURSE ONLY and ACCOMMODATION PACKAGES;\n• COURSE ONLY – Includes lunch and refreshments.\n• ACCOMMODATION PACKAGE (to be purchased in addition to the course only option) – Includes breakfast\, lunch\, a welcome dinner on Monday evening\, a farewell dinner on Friday evening\, refreshments and accommodation. Self-catering facilities are available in the accommodation. Accommodation is approximately a 6-minute walk from the PS statistics head office. Accommodation is multiple occupancy (max 3-4 people) single-sex en-suite rooms. Arrival Sunday 3rd November (between 17:00 – 19:00) and departure Friday 8th November (accommodation must be vacated by 09:15). \nOther payment options are available; please email oliverhooker@psstatistics.com \nPLEASE READ – CANCELLATION POLICY: Cancellations are accepted up to 28 days before the course start date\, subject to a 25% cancellation fee. Later cancellations may be considered; contact oliverhooker@psstatistics.com. Failure to attend will result in the full cost of the course being charged. In the unfortunate event that a course is cancelled due to unforeseen circumstances\, a full refund of the course fees (and accommodation fees\, if booked through PS statistics) will be credited. However\, PS statistics will not be held responsible/liable for any travel fees\, accommodation costs or other expenses incurred by you as a result of the cancellation. Because of this\, PS statistics strongly recommends that any travel and accommodation booked by you or your institute is refundable/flexible\, and that you delay booking travel and accommodation until as close to the course start date as is economically viable. \n\n\n\n \nDr. 
Mark Andrews\n\n\n \n\n\n\n\n\n\nTeaching Format\n\nThis course will be hands-on and workshop based. Throughout each day\, there will be some lecture-style presentation\, i.e.\, using slides\, introducing and explaining key concepts. However\, even in these cases\, the topics being covered will include practical worked examples that we will work through together. \nAssumed quantitative knowledge \nWe assume familiarity with inferential statistics concepts like hypothesis testing and statistical significance\, and some practical experience with commonly used methods like linear regression\, correlation\, or t-tests. Most or all of these concepts and methods are covered in a typical undergraduate statistics course in any of the sciences and related fields. \nAssumed computer background \nR experience is desirable but not essential. Although we will be using R extensively\, all the code that we use will be made available\, and so attendees will just need to copy and paste this code and make minor modifications to it. Attendees should install R and RStudio on their own computers before the workshops\, and have some minimal familiarity with the R environment. If some additional familiarity with R is required\, countless short video introductions to R and RStudio are available online (e.g.\, https://youtu.be/lVKMsaWju8w). \nEquipment and software requirements \nA laptop computer with working versions of R and RStudio is required. R and RStudio are both available as free and open source software for PCs\, Macs\, and Linux computers. R may be downloaded by following the links here: https://www.r-project.org/. RStudio may be downloaded by following the links here: https://www.rstudio.com/. In addition to R and RStudio\, Stan for R should also be installed. Stan is also free and open source software and is available for PCs\, Macs\, and Linux computers. 
More information about Stan is available here: http://mc-stan.org/\, and Stan for R (i.e.\, RStan) can be installed from here: https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started. Many supplementary R packages will be required. The list of necessary packages will be made available to all attendees prior to the course. These can all be installed from within RStudio with one click. It is highly recommended that all attendees come with all the necessary software and packages installed in advance. This will minimize troubleshooting during the workshop that might delay our progress. \nhttps://cran.r-project.org/ \nDownload RStudio \nUNSURE ABOUT SUITABILITY? THEN PLEASE ASK oliverhooker@psstatistics.com \n\n\n\nCourse Programme\n\nSunday 3rd\nMeet at 43 Cook Street\, Glasgow G5 8JN at approx. 17:00 onwards \nMonday 4th – Classes from 09:30 to 17:30 \nClass 1: We will begin with an overview of what Bayesian data analysis is in essence and how it fits into statistics as it is practiced generally. Our main point here will be that Bayesian data analysis is effectively an alternative school of statistics to the traditional approach\, which is referred to variously as the classical\, sampling-theory-based\, or frequentist approach\, rather than being a specialized or advanced statistics topic. However\, there is no real necessity to see these two general approaches as mutually exclusive and in direct competition\, and a pragmatic blend of both approaches is entirely possible. \nClass 2: Introducing Bayes’ rule. Bayes’ rule can be described as a means to calculate the probability of causes from some known effects. As such\, it can be used as a means for performing statistical inference. In this section of the course\, we will work through some simple and intuitive calculations using Bayes’ rule. 
Ultimately\, all of Bayesian data analysis is based on an application of these methods to more complex statistical models\, and so understanding these simple cases of the application of Bayes’ rule can help provide a foundation for the more complex cases. \nClass 3: Bayesian inference in a simple statistical model. In this section\, we will work through a classic statistical inference problem\, namely inferring the number of red marbles in an urn of red and black marbles. This problem is easy to analyse completely with just the use of R\, yet it allows us to delve into all the key concepts of Bayesian statistics\, including the likelihood function\, prior distributions\, posterior distributions\, maximum a posteriori estimation\, high posterior density intervals\, posterior predictive intervals\, marginal likelihoods\, Bayes factors\, and model evaluation based on out-of-sample generalization. \nTuesday 5th – Classes from 09:30 to 17:30 \nClass 4: Bayesian analysis of linear and normal models. Statistical models based on linear relationships and the normal distribution are a mainstay of statistical analyses in general. They encompass models such as linear regression\, Pearson’s correlation\, t-tests\, ANOVA\, ANCOVA\, and so on. In this section\, we will describe how to do Bayesian analysis of linear and normal models\, paying particular attention to Bayesian linear regression. One of the aims of this section is to identify some important and interesting parallels between Bayesian and classical or frequentist analyses. This shows how Bayesian and classical analyses can be seen as ultimately providing two different perspectives on the same problem. \nClass 5: The previous section provides a so-called analytical approach to linear and normal models\, where we can calculate desired quantities and distributions by way of simple formulae. However\, analytical approaches to Bayesian analyses are only possible in a relatively restricted set of cases. 
In contrast\, numerical methods\, specifically Markov Chain Monte Carlo (MCMC) methods\, can be applied to virtually any Bayesian model. In this section\, we will re-perform the analysis presented in the previous section\, but using MCMC methods. For this\, we will use the brms package in R\, which provides an exceptionally easy-to-use interface to Stan. \nClass 6: This section continues the previous one\, but explores a wider range of linear and normal models\, namely the general linear models. These include models with multiple predictors\, some or all of which may be categorical\, and interactions between these predictors. We will use brms for all of these analyses. For all the examples covered here\, we will use real world data-sets taken from a variety of different fields. \nWednesday 6th – Classes from 09:30 to 17:30 \nClass 7: Bayesian generalized linear models. Generalized linear models include models such as logistic regression\, including multinomial and ordinal logistic regression\, Poisson regression\, negative binomial regression\, and other models. Again\, for these analyses we will use the brms package and explore this wide range of models using real world data-sets. \nClass 8: Model evaluation and checking. A general topic in any analysis is to evaluate the suitability of the chosen or assumed statistical models in the analysis. This general topic incorporates hypothesis testing. In this section\, we will discuss this topic in depth\, paying particular attention to posterior predictive checks\, cross-validation\, information criteria\, and Bayes factors. We will revisit many of the examples covered so far\, and perform model checking\, evaluation and hypothesis testing with the models that we used. \nThursday 7th – Classes from 09:30 to 17:30 \nClass 9: Multilevel general and generalized linear models. In this section\, we will cover the multilevel variants of the regression models\, i.e. linear\, logistic\, Poisson\, etc.\, that we have covered so far. The topic of multilevel (or hierarchical) models is a major one\, and multilevel models are widely used throughout the sciences. In general\, multilevel models arise whenever data are correlated due to membership of a group (or group of groups\, and so on). For example\, if we have data concerning how socioeconomic status relates to educational achievement\, the data might come from individual children. But these children are in separate schools\, the schools are in separate cities\, and the cities are in separate countries. Thus\, the entire data-set comprises groups (of groups\, etc.) of data subsets\, and there may be important variation across these subsets. The entire day is devoted to multilevel regression models. We will\, as before\, use a wide range of real-world data-sets\, and move between linear\, logistic\, etc.\, models as we explore these analyses. We will pay particular attention to considering when and how to use varying slope and varying intercept models\, and how to choose between maximal and minimal models. Here\, we will cover model checking and evaluation in the same depth as with the previous models. \nFriday 8th – Classes from 09:30 to 16:00 \nClass 10: MCMC in depth. Although we will have used MCMC methods extensively by this point\, we will have hidden some of their technical details. As one approaches more advanced Bayesian topics\, a deeper understanding of MCMC methods is required. In this section\, we will begin by discussing simple Monte Carlo (MC) approaches like rejection sampling and importance sampling\, and then proceed to Markov Chain Monte Carlo (MCMC) methods such as Gibbs sampling\, Metropolis-Hastings sampling\, slice sampling\, and Hamiltonian Monte Carlo. \nClass 11: Customized and bespoke statistical models. Thus far\, we will have used the brms package for almost all of our analyses. While brms is an excellent tool\, in some cases\, especially in more advanced analyses\, it is not possible to use a pre-defined statistical model\, e.g. 
a linear or logistic regression model\, and it is necessary to develop customized and bespoke probabilistic models directly in the Stan language itself. In this final section of the course\, we will delve into how to write Stan code directly. We’ll first explore the Stan code that brms creates\, and we’ll learn how to modify this code. We will then write customized models that perform nonlinear regression using Gaussian processes and radial basis functions\, and also finite mixture models. Through these examples\, we will learn how to write and analyse any type of custom statistical model and thus produce models that are well suited to whatever specialized problem we are working on. \n\n\n\n
URL:https://www.psstatistics.com/course/introduction-to-bayesian-data-analysis-for-social-and-behavioural-sciences-using-r-and-stan-bdrs02/
LOCATION:53 Morrison Street\, Glasgow\, Scotland\, G5 8LB\, United Kingdom
GEO:55.8535874;-4.267977
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=53 Morrison Street Glasgow Scotland G5 8LB United Kingdom;X-APPLE-RADIUS=500;X-TITLE=53 Morrison Street:geo:-4.267977,55.8535874
ATTACH;FMTTYPE=image/png:https://www.psstatistics.com/wp-content/uploads/2018/06/bdsr01-2.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20200330
DTEND;VALUE=DATE:20200404
DTSTAMP:20190819T212116
CREATED:20180703T090336Z
LAST-MODIFIED:20190806T165659Z
UID:3253-1585526400-1585958399@www.psstatistics.com
SUMMARY:Introduction to statistical modelling for psychologists in R (IPSY03)
DESCRIPTION:\nCourse Overview:\nThis course will provide an introduction to working with real-life data typical of those encountered in the field of psychology. The course will be delivered by Dr. Dale Barr and Dr. Luc Bussière\, who are practicing academics in the fields of psychology and evolutionary biology respectively\, with many years of expertise with R and statistical modelling as both scientists and instructors. This five-day course will consist of a series of modules (each lasting roughly half a day) covering topics including the basic ‘canon’ of psychological statistics (t-tests\, correlation/regression\, ANOVA) presented within the framework of general linear models\, and building up to logistic regression and linear mixed-effects modelling. Along the way you will gain in-depth experience in data wrangling using the R ‘tidyverse’\, data and model visualisation and plotting\, as well as exploring and understanding model diagnostics. Classes will consist of a mixture of lectures and practical exercises designed either to build required skills for future modules or to perform a family of analyses that is frequently encountered in the psychological literature. \n\n\nIntended Audience\nAny researchers (from postgraduate students to senior investigators) wanting to learn how to correctly analyse data typical of the field of psychology using the R programming language. \nVenue – PS statistics head office\, 53 Cook Street\, Glasgow\, G5 8LB – Google map\nAvailability – 30 places\nDuration – 5 days\nContact hours – Approx. 35 hours\nECTS – Equal to 3 ECTS\nLanguage – English \nPackages\nWe offer COURSE ONLY and ACCOMMODATION PACKAGES:\n• COURSE ONLY – Includes lunch\, refreshments\, and a welcome meal on Monday evening.\n• ACCOMMODATION PACKAGE (to be purchased in addition to the COURSE ONLY option) – Includes breakfast\, lunch\, refreshments\, and a welcome dinner on Monday evening. Self-catering facilities are available in the accommodation. 
Accommodation is multiple occupancy (max 3-4 people) single-sex en-suite rooms. Arrival Sunday 29th March (between 17:00-21:00) and departure Friday 3rd April (accommodation must be vacated by 09:15). \nTo book ‘COURSE ONLY’ with the option to add the additional ‘ACCOMMODATION PACKAGE’ please scroll to the bottom of this page. \nOther payment options are available; please email oliverhooker@psstatistics.com \nPLEASE READ – CANCELLATION POLICY: Cancellations are accepted up to 28 days before the course start date\, subject to a 25% cancellation fee. Cancellations later than this may be considered; contact oliverhooker@psstatistics.com. Failure to attend will result in the full cost of the course being charged. In the unfortunate event that a course is cancelled due to unforeseen circumstances\, a full refund of the course fees (and accommodation fees if booked through PS statistics) will be credited. However\, PS statistics will not be held responsible/liable for any travel fees\, accommodation costs or other expenses incurred by you as a result of the cancellation. Because of this\, PS statistics strongly recommends that any travel and accommodation booked by you or your institute be refundable/flexible\, and that you delay booking your travel and accommodation until as close to the course start date as is economically viable. \n\n\n\n \nDr. Dale Barr\n\n\n \nDr. Luc Bussiere\n\n\n\n\n— \n\n\nTeaching Format\n\nThere will be a combination of lectures and practicals. Practicals will be based on the topics covered in the preceding lectures. Data sets for computer practicals will be provided by the instructors. \nAssumed quantitative knowledge \nA basic understanding of statistical concepts\, including statistical significance and hypothesis testing. \nAssumed computer background \nFamiliarity with R. Ability to import/export data\, manipulate data frames\, fit basic statistical models\, and generate simple exploratory and diagnostic plots. 
Relative newcomers to programming in R will be provided (by the instructors) with some introductory exercises to complete prior to the course. These will introduce some of the core features of R and RStudio before the course starts. \nEquipment and software requirements \nA laptop/personal computer with working versions of R and RStudio. R and RStudio are available for both PC and Mac and can be downloaded for free by following these links. \nhttps://cran.r-project.org/ \nDownload RStudio \nIt is essential that you come with all necessary software and packages already installed (you will be sent a list of packages prior to the course)\, as internet access may not always be available. \nUNSURE ABOUT SUITABILITY? PLEASE ASK oliverhooker@psstatistics.com \n\n\n\nCourse Programme\n\nSunday 29th\nMeet at 43 Cook Street\, Glasgow G5 8JN at approx. 17:00 onwards \nMonday 30th – Classes from 09:30 to 17:00\nIntroduction to R/RStudio\n• interacting with the RStudio IDE\n• installing add-on packages\n• R scripts and R notebooks\n• coding style guidelines\n• session management and project organization \nData wrangling and reproducible workflows with the tidyverse\n• loading datasets (csv\, excel\, SPSS\, google drive)\n• filtering\, sorting\, and reshaping data\n• grouping and summarizing data\n• combining datasets using joins\n• chaining commands together using ‘pipes’ \nTuesday 31st – Classes from 09:30 to 17:00\nData visualization with ggplot2\n• the ‘grammar of graphics’ philosophy\n• univariate plots: histograms\, density plots\, boxplots\, bar graphs\, violin and pirate plots\n• bivariate plots: scatterplots\, line graphs\, interaction plots\n• enhancing plots using labels and themes\n• creating subplots with faceting \nThe psychology stats ‘canon’ and the General Linear Model\n• t-tests\, confidence intervals\, effect size\, and power\n• correlation matrices and simple linear regression\n• contingency tables; chi-square tests 
\nWednesday 1st – Classes from 09:30 to 17:00\nMultiple Regression\n• coding categorical predictors\n• detecting and dealing with multicollinearity\n• polynomial models for time-series data\n• model comparison and information criteria\n• model checking/validation\, plotting predictions \nThursday 2nd – Classes from 09:30 to 17:00\nAnalysis of Variance in the GLM framework\n• one-factor designs\n• multifactor designs: main effects and interactions\n• within-subject and mixed designs\n• checking assumptions (sphericity\, normality\, homogeneity of variance)\n• plotting and interpreting interactions\n• follow-up tests and contrasts \nGeneralized Linear Models\n• binary data (logistic regression)\n• count data (Poisson regression)\n• generating and plotting model predictions \nFriday 3rd – Classes from 09:30 to 16:00\nIntroduction to Linear Mixed-Effects Models\n• crossed random effects of participant and item\n• understanding variance components through data simulation\n• specifying the random effects structure\n• translating study design into lmer model syntax \n\n\n\n
URL:https://www.psstatistics.com/course/introduction-to-statistics-using-r-for-psychologists-ipsy03/
LOCATION:53 Morrison Street\, Glasgow\, Scotland\, G5 8LB\, United Kingdom
GEO:55.8535874;-4.267977
X-APPLE-STRUCTURED-LOCATION;VALUE=URI;X-ADDRESS=53 Morrison Street Glasgow Scotland G5 8LB United Kingdom;X-APPLE-RADIUS=500;X-TITLE=53 Morrison Street:geo:-4.267977,55.8535874
ATTACH;FMTTYPE=image/jpeg:https://www.psstatistics.com/wp-content/uploads/2017/05/11.jpg
END:VEVENT
END:VCALENDAR