From 548883447171533f9a17293eda1316267d0b3a90 Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Fri, 11 Feb 2022 10:17:48 +0000 Subject: [PATCH 01/12] documentation --- doxygen/10_UserManual.dox | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/doxygen/10_UserManual.dox b/doxygen/10_UserManual.dox index 3ff5693a14..8f1ef25b6a 100644 --- a/doxygen/10_UserManual.dox +++ b/doxygen/10_UserManual.dox @@ -884,16 +884,16 @@ Weight update model variables associated with the sparsely connected synaptic po \c g=[g_Pre0-Post1 g_pre0-post2 g_pre1-post0 X] - SynapseMatrixConnectivity::BITMASK is an alternative sparse matrix implementation where which synapses within the matrix are present is specified as a binary array (see \ref ex_mbody). This structure is somewhat less efficient than the ``SynapseMatrixConnectivity::SPARSE`` format and doesn't allow individual weights per synapse. However it does require the smallest amount of GPU memory for large networks. - SynapseMatrixConnectivity::PROCEDURAL is a new approach where, rather than being stored in memory, connectivity described using \ref sectSparseConnectivityInitialisation is generated 'on the fly' as spikes are processed (see \cite Knight2020 for more information). Therefore, this approach offers very large memory savings for a small performance cost but does not currently support plasticity. +- SynapseMatrixConnectivity::TOEPLITZ is another new approach tailored to \add_python_text{In Python\, SynapseMatrixConnectivity::SPARSE connectivity can be manually initialised from lists of pre and postsynaptic indices using the pygenn.SynapseGroup.set_sparse_connections method.} Furthermore the SynapseMatrixWeight defines how - SynapseMatrixWeight::INDIVIDUAL allows each individual synapse to have unique weight update model variables. 
-Their values must be initialised at runtime and, if running on the GPU, copied across from the user side code, using the \c pushXXXXXStateToDevice function, where XXXX is the name of the synapse population. - SynapseMatrixWeight::INDIVIDUAL_PSM allows each postsynapic neuron to have unique post synaptic model variables. -Their values must be initialised at runtime and, if running on the GPU, copied across from the user side code, using the \c pushXXXXXStateToDevice function, where XXXX is the name of the synapse population. - SynapseMatrixWeight::GLOBAL saves memory by only maintaining one copy of the weight update model variables. This is automatically initialized to the initial value passed to \add_cpp_python_text{ModelSpec::addSynapsePopulation, pygenn.GeNNModel.add_synapse_population}. - SynapseMatrixWeight::PROCEDURAL generates weight update model variable values described using \ref sectVariableInitialisation 'on the fly' as spikes are processed. This is typically used alongside SynapseMatrixConnectivity::PROCEDURAL for large models with static connectivity and weights/delays sampled from probability distributions (see \cite Knight2020 for an example). +- SynapseMatrixWeight::KERNEL saves memory by sharing a small kernel of weights between all synapses. The size of the kernel is defined by the sparse connectivity initialisation snippet (see \ref sectSparseConnectivityInitialisation) or Toeplitz connectivity initialisation snippet (see TODO). 
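As a back-of-envelope illustration of the memory trade-off between these weight formats, the following plain-Python sketch (not GeNN code; the 3x3 kernel, 32x32 single-channel populations and 4-byte weights are assumptions chosen purely for illustration) compares the weight storage each format would need:

```python
# Rough weight-memory comparison for a 3x3 convolution between two
# 32x32 single-channel populations, assuming 4-byte weights.
# Illustrative arithmetic only - not how GeNN allocates memory.

def dense_weight_bytes(num_pre, num_post, bytes_per_weight=4):
    # DENSE + INDIVIDUAL: one weight per pre/post pair
    return num_pre * num_post * bytes_per_weight

def sparse_weight_bytes(num_pre, max_row_length, bytes_per_weight=4):
    # SPARSE + INDIVIDUAL: one weight per actual synapse (ragged matrix)
    return num_pre * max_row_length * bytes_per_weight

def kernel_weight_bytes(kernel_shape, bytes_per_weight=4):
    # KERNEL: a single shared kernel, independent of population size
    n = 1
    for dim in kernel_shape:
        n *= dim
    return n * bytes_per_weight

num_pre = num_post = 32 * 32   # 1024 neurons per population
max_row_length = 3 * 3         # each presynaptic neuron targets <= 9 neurons

dense = dense_weight_bytes(num_pre, num_post)            # 4194304 bytes
sparse = sparse_weight_bytes(num_pre, max_row_length)    # 36864 bytes
kernel = kernel_weight_bytes((3, 3, 1, 1))               # 36 bytes
assert kernel < sparse < dense
```

The ordering, not the exact figures, is the point: a shared kernel's footprint does not grow with population size.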
Only certain combinations of SynapseMatrixConnectivity and SynapseMatrixWeight are sensible therefore, to reduce confusion, the SynapseMatrixType enumeration defines the following options which can be passed to \add_cpp_python_text{ModelSpec::addSynapsePopulation, pygenn.GeNNModel.add_synapse_population}: - SynapseMatrixType::SPARSE_GLOBALG @@ -906,6 +906,8 @@ Only certain combinations of SynapseMatrixConnectivity and SynapseMatrixWeight a - SynapseMatrixType::BITMASK_GLOBALG_INDIVIDUAL_PSM - SynapseMatrixType::PROCEDURAL_GLOBALG - SynapseMatrixType::PROCEDURAL_PROCEDURALG +- SynapseMatrixType::PROCEDURAL_KERNELG +- SynapseMatrixType::TOEPLITZ_KERNELG \add_toggle_python In Python, these matrix types can be selected by their unqualified name e.g. "DENSE_INDIVIDUALG". From 1c711a18d29d28a7bf3d0b067d4efc73b110066c Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Fri, 11 Feb 2022 10:17:54 +0000 Subject: [PATCH 02/12] started release notes --- doxygen/09_ReleaseNotes.dox | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/doxygen/09_ReleaseNotes.dox b/doxygen/09_ReleaseNotes.dox index 6f2a399583..87d46765bc 100644 --- a/doxygen/09_ReleaseNotes.dox +++ b/doxygen/09_ReleaseNotes.dox @@ -1,5 +1,24 @@ /*! \page ReleaseNotes Release Notes +Release Notes for GeNN v4.7.0 +==== +This release adds a number of significant new features to GeNN as well as including a number of bug fixes that have been identified since the 4.6.0 release. + +User Side Changes +---- +1. While a wide range of convolutional type connectivity can be implemented using SynapseMatrixConnectivity::PROCEDURAL, the performance is often worse than sparse connectivity. SynapseMatrixConnectivity::TOEPLITZ connectivity provides a more efficient solution with some typical connectivity patterns implemented in InitToeplitzConnectivitySnippet::Conv2D and InitToeplitzConnectivitySnippet::AvgPoolConv2D (see TODO and \ref subsect34) +2. 
Weight kernels previously had to be provided as extra global parameters via the InitVarSnippet::Kernel variable initialisation snippet. This meant kernels had to be manually allocated and couldn't be initialised using standard functionality. SynapseMatrixWeight::KERNEL allows kernels to be treated as standard state variables (see \ref subsect34). +3. Some presynaptic updates need to update the state of presynaptic neurons as well as postsynaptic. This input can now be applied using the \$(addToPre,...) function in presynaptic update code and the destination additional input variable can be specified using SynapseGroup::setPreTargetVar (see \ref sect34) +4. On Windows, all models in the same directory would build their generated code into DLLs with the same name, prevented the the cachine system introduced in v4.5.0 working properly. CodeGenerator::PreferencesBase::includeModelNameInDLL includes the name of the model in the DLL filename, resolving this problem. This is now the default behaviour in PyGeNN but, when using GeNN from C++, the flag must be manually set and MSBuild projects updated. +5. Neuron code can now sample the binomial distribution using \$(gennrand_binomial) and this can be used to initialise variables with InitVarSnippet::Binomial (see \ref neuron_rng and \ref sectVariableInitialisation) +6. In the latest version of Windows Subsystem for Linux, CUDA is supported but libcuda is mounted in a non-standard location. GeNN's CUDA backend now adds this location to the linker paths. + +Bug fixes: +---- +1. Fixed issues with some configurations of InitSparseConnectivitySnippet::Conv2D when stride > 1 which caused incorrect connectivity to be instantiated as well as crashes when this snippet was used to generate sparse connectivity. +2. Fixed issue where, if \$(addToInSynDelay) was used in spike-like event code, it was not detected and dendritic delay structures were not correctly created. +3. 
Fixed issue where precision wasn't being correctly applied to neuron additional input variable and sparse connectivity row build state variable initialisation, meaning double precision code could unintentionally be generated. + Release Notes for GeNN v4.6.0 ==== This release adds a number of significant new features to GeNN as well as several usability improvements for PyGeNN. From 7f55dea1095ccabbad016fb654988501014ee117 Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Fri, 11 Feb 2022 10:18:02 +0000 Subject: [PATCH 03/12] bumped version --- version.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/version.txt b/version.txt index 6016e8addc..f6cdf40983 100644 --- a/version.txt +++ b/version.txt @@ -1 +1 @@ -4.6.0 +4.7.0 From f683eaceee032e84cdb3f84bce7ee16f33255084 Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Fri, 11 Feb 2022 10:27:52 +0000 Subject: [PATCH 04/12] Thomas! --- makedoc.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/makedoc.sh b/makedoc.sh index 98af7dee14..a638f57e2b 100755 --- a/makedoc.sh +++ b/makedoc.sh @@ -1,3 +1,3 @@ #! /bin/bash -export GENN_PATH=/its/home/tn41/localdisk_projects/develop/genn +export GENN_PATH=$(dirname $(realpath "$0")) (cat doxygen/genn-doxygen.conf ; echo "PROJECT_NUMBER=`cat version.txt`") | doxygen - From b51cd629b8494c2fdd0614644ae61db9fd42cc28 Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Fri, 11 Feb 2022 10:40:14 +0000 Subject: [PATCH 05/12] fixed typo --- include/genn/genn/initVarSnippet.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/genn/genn/initVarSnippet.h b/include/genn/genn/initVarSnippet.h index 1506d6ac6d..8d3e694bcf 100644 --- a/include/genn/genn/initVarSnippet.h +++ b/include/genn/genn/initVarSnippet.h @@ -198,7 +198,7 @@ class Exponential : public Base // InitVarSnippet::Gamma //---------------------------------------------------------------------------- //! 
Initialises variable by sampling from the gamma distribution -/*! This snippet takes s parameters: +/*! This snippet takes 2 parameters: * - \c a - distribution shape - \c b - distribution scale*/ From d4fd6e995240c91b97c68b92cae70640b7066548 Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Fri, 11 Feb 2022 10:40:57 +0000 Subject: [PATCH 06/12] documented binomial --- doxygen/10_UserManual.dox | 2 ++ 1 file changed, 2 insertions(+) diff --git a/doxygen/10_UserManual.dox b/doxygen/10_UserManual.dox index 8f1ef25b6a..021b43c62e 100644 --- a/doxygen/10_UserManual.dox +++ b/doxygen/10_UserManual.dox @@ -434,6 +434,7 @@ Many neuron models have probabilistic terms, for example a source of noise or a - \$(gennrand_exponential) returns a number drawn from an exponential distribution with \f$\lambda=1\f$. - \$(gennrand_log_normal, MEAN, STDDEV) returns a number drawn from a log-normal distribution with the specified mean and standard deviation. - \$(gennrand_gamma, ALPHA) returns a number drawn from a gamma distribution with the specified shape. +- \$(gennrand_binomial, N, P) returns a number drawn from a binomial distribution with the specified number of trials and probability of success. \section neuron_pop_own_type Creating neuron populations with your own neuron type Once defined in this way, new neuron models classes, can be used in network descriptions by referring to their type e.g. @@ -956,6 +957,7 ... +- InitVarSnippet::Binomial \add_toggle_python In Python, these models can be selected by their unqualified name e.g. "Normal". 
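The semantics of \$(gennrand_binomial, N, P) can be sketched in plain Python (not GeNN or pygenn code; GeNN's device-side implementation will differ): count the successes in N independent Bernoulli trials with success probability P.

```python
# Illustrative sketch of binomial sampling semantics - counting
# successes in n independent Bernoulli(p) trials. Not GeNN code.
import random

def gennrand_binomial_like(n, p, rng):
    # One Bernoulli trial per iteration; count how many succeed
    return sum(1 for _ in range(n) if rng.random() < p)

rng = random.Random(1234)  # seeded for reproducibility
samples = [gennrand_binomial_like(20, 0.5, rng) for _ in range(1000)]

# Samples are always bounded by the trial count...
assert all(0 <= s <= 20 for s in samples)
# ...and the sample mean should sit close to n * p = 10
mean = sum(samples) / len(samples)
```

`random.Random` is used here only to make the sketch deterministic; on device, GeNN draws from its own per-thread RNG streams.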
From 334048cb1de47f12ff3a4cf571ea6cc88ea7c538 Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Fri, 11 Feb 2022 10:44:02 +0000 Subject: [PATCH 07/12] pretarget --- doxygen/10_UserManual.dox | 1 + 1 file changed, 1 insertion(+) diff --git a/doxygen/10_UserManual.dox b/doxygen/10_UserManual.dox index 021b43c62e..894dac63f4 100644 --- a/doxygen/10_UserManual.dox +++ b/doxygen/10_UserManual.dox @@ -164,6 +164,7 @@ When using a sparse connectivity initialisation snippet, these values are set au time step `DT`) allowed for synapses in this population. No values larger than this should be passed to the delay parameter of the `addToDenDelay` function in user code (see \ref sect34). - SynapseGroup::setSpanType() sets how incoming spike processing is parallelised for this synapse group. The default SynapseGroup::SpanType::POSTSYNAPTIC is nearly always the best option, but SynapseGroup::SpanType::PRESYNAPTIC may perform better when there are large numbers of spikes every timestep or very few postsynaptic neurons.} - SynapseGroup::setPSTargetVar() sets the additional input variable (or standard "Isyn") on the postsynaptic neuron population where input from this synapse group is routed (see section \ref neuron_additional_input). +- SynapseGroup::setPreTargetVar() sets the additional input variable (or standard "Isyn") on the presynaptic neuron population where input provided from this synapse group via \$(addToPre,...) is routed (see section \ref neuron_additional_input and \ref sect34). 
\end_toggle \add_toggle_python The pygenn.GeNNModel.add_synapse_population function returns a pygenn.SynapseGroup object which can be further configured, namely with: From ea27be4cd2e16ad64b45da0ae5dac876f1d19b10 Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Fri, 11 Feb 2022 14:06:01 +0000 Subject: [PATCH 08/12] Toeplitz stuff --- doxygen/09_ReleaseNotes.dox | 8 +-- doxygen/10_UserManual.dox | 121 ++++++++++++++++++++++++++++++++++-- 2 files changed, 119 insertions(+), 10 deletions(-) diff --git a/doxygen/09_ReleaseNotes.dox b/doxygen/09_ReleaseNotes.dox index 87d46765bc..882f693613 100644 --- a/doxygen/09_ReleaseNotes.dox +++ b/doxygen/09_ReleaseNotes.dox @@ -6,10 +6,10 @@ This release adds a number of significant new features to GeNN as well as includ User Side Changes ---- -1. While a wide range of convolutional type connectivity can be implemented using SynapseMatrixConnectivity::PROCEDURAL, the performance is often worse than sparse connectivity. SynapseMatrixConnectivity::TOEPLITZ connectivity provides a more efficient solution with some typical connectivity patterns implemented in InitToeplitzConnectivitySnippet::Conv2D and InitToeplitzConnectivitySnippet::AvgPoolConv2D (see TODO and \ref subsect34) -2. Weight kernels previously had to be provided as extra global parameters via the InitVarSnippet::Kernel variable initialisation snippet. This meant kernels had to be manually allocated and couldn't be initialised using standard functionality. SynapseMatrixWeight::KERNEL allows kernels to be treated as standard state variables (see \ref subsect34). -3. Some presynaptic updates need to update the state of presynaptic neurons as well as postsynaptic. This input can now be applied using the \$(addToPre,...) function in presynaptic update code and the destination additional input variable can be specified using SynapseGroup::setPreTargetVar (see \ref sect34) -4. 
On Windows, all models in the same directory would build their generated code into DLLs with the same name, prevented the the cachine system introduced in v4.5.0 working properly. CodeGenerator::PreferencesBase::includeModelNameInDLL includes the name of the model in the DLL filename, resolving this problem. This is now the default behaviour in PyGeNN but, when using GeNN from C++, the flag must be manually set and MSBuild projects updated. +1. While a wide range of convolutional type connectivity can be implemented using SynapseMatrixConnectivity::PROCEDURAL, the performance is often worse than sparse connectivity. SynapseMatrixConnectivity::TOEPLITZ provides a more efficient solution with InitToeplitzConnectivitySnippet::Conv2D and InitToeplitzConnectivitySnippet::AvgPoolConv2D implementing some typical connectivity patterns (see \ref sectToeplitzConnectivityInitialisation) +2. Shared weight kernels previously had to be provided as extra global parameters via the InitVarSnippet::Kernel variable initialisation snippet. This meant kernels had to be manually allocated to the correct size and couldn't be initialised using standard functionality. SynapseMatrixWeight::KERNEL allows kernels to be treated as standard state variables (see \ref subsect34). +3. Some presynaptic updates need to update the state of presynaptic neurons as well as postsynaptic. These updates can now be made using the \$(addToPre,...) function from presynaptic update code and the destination additional input variable can be specified using SynapseGroup::setPreTargetVar (see \ref sect34) +4. On Windows, all models in the same directory would build their generated code into DLLs with the same name, preventing the caching system introduced in v4.5.0 from working properly. CodeGenerator::PreferencesBase::includeModelNameInDLL includes the name of the model in the DLL filename, resolving this problem. 
This is now the default behaviour in PyGeNN but, when using GeNN from C++, the flag must be manually set and MSBuild projects updated to link to the correct DLL. 5. Neuron code can now sample the binomial distribution using \$(gennrand_binomial) and this can be used to initialise variables with InitVarSnippet::Binomial (see \ref neuron_rng and \ref sectVariableInitialisation) 6. In the latest version of Windows Subsystem for Linux, CUDA is supported but libcuda is mounted in a non-standard location. GeNN's CUDA backend now adds this location to the linker paths. diff --git a/doxygen/10_UserManual.dox b/doxygen/10_UserManual.dox index 894dac63f4..55a3a7a5a5 100644 --- a/doxygen/10_UserManual.dox +++ b/doxygen/10_UserManual.dox @@ -31,6 +31,7 @@ - \subpage sectVariableInitialisation - \subpage sectVariableReferences - \subpage sectSparseConnectivityInitialisation +- \subpage sectToeplitzConnectivityInitialisation \section sIntro Introduction @@ -886,18 +887,18 @@ Weight update model variables associated with the sparsely connected synaptic po \c g=[g_Pre0-Post1 g_pre0-post2 g_pre1-post0 X] - SynapseMatrixConnectivity::BITMASK is an alternative sparse matrix implementation where which synapses within the matrix are present is specified as a binary array (see \ref ex_mbody). This structure is somewhat less efficient than the ``SynapseMatrixConnectivity::SPARSE`` format and doesn't allow individual weights per synapse. However it does require the smallest amount of GPU memory for large networks. - SynapseMatrixConnectivity::PROCEDURAL is a new approach where, rather than being stored in memory, connectivity described using \ref sectSparseConnectivityInitialisation is generated 'on the fly' as spikes are processed (see \cite Knight2020 for more information). Therefore, this approach offers very large memory savings for a small performance cost but does not currently support plasticity. 
-- SynapseMatrixConnectivity::TOEPLITZ is another new approach tailored to +- SynapseMatrixConnectivity::TOEPLITZ is another new approach where kernel-based connectivity, such as convolutions, described using \ref sectToeplitzConnectivityInitialisation is generated 'on the fly' as spikes are processed. As well as offering the large memory savings of ``SynapseMatrixConnectivity::PROCEDURAL``, if the batch size is large enough it also outperforms both ``SynapseMatrixConnectivity::SPARSE`` and ``SynapseMatrixConnectivity::PROCEDURAL``. \add_python_text{In Python\, SynapseMatrixConnectivity::SPARSE connectivity can be manually initialised from lists of pre and postsynaptic indices using the pygenn.SynapseGroup.set_sparse_connections method.} -Furthermore the SynapseMatrixWeight defines how +Furthermore the `SynapseMatrixWeight` defines how - SynapseMatrixWeight::INDIVIDUAL allows each individual synapse to have unique weight update model variables. - SynapseMatrixWeight::INDIVIDUAL_PSM allows each postsynapic neuron to have unique post synaptic model variables. - SynapseMatrixWeight::GLOBAL saves memory by only maintaining one copy of the weight update model variables. This is automatically initialized to the initial value passed to \add_cpp_python_text{ModelSpec::addSynapsePopulation, pygenn.GeNNModel.add_synapse_population}. - SynapseMatrixWeight::PROCEDURAL generates weight update model variable values described using \ref sectVariableInitialisation 'on the fly' as spikes are processed. This is typically used alongside SynapseMatrixConnectivity::PROCEDURAL for large models with static connectivity and weights/delays sampled from probability distributions (see \cite Knight2020 for an example). -- SynapseMatrixWeight::KERNEL saves memory by sharing a small kernel of weights between all synapses. 
The size of the kernel is defined by the sparse connectivity initialisation snippet (see \ref sectSparseConnectivityInitialisation) or Toeplitz connectivity initialisation snippet (see TODO). +- SynapseMatrixWeight::KERNEL saves memory by sharing a small kernel of weights between all synapses. The size of the kernel is defined by the sparse connectivity initialisation snippet (see \ref sectSparseConnectivityInitialisation) or Toeplitz connectivity initialisation snippet (see \ref sectToeplitzConnectivityInitialisation). -Only certain combinations of SynapseMatrixConnectivity and SynapseMatrixWeight are sensible therefore, to reduce confusion, the SynapseMatrixType enumeration defines the following options which can be passed to \add_cpp_python_text{ModelSpec::addSynapsePopulation, pygenn.GeNNModel.add_synapse_population}: +Only certain combinations of `SynapseMatrixConnectivity` and `SynapseMatrixWeight` are sensible therefore, to reduce confusion, the `SynapseMatrixType` enumeration defines the following options which can be passed to \add_cpp_python_text{ModelSpec::addSynapsePopulation, pygenn.GeNNModel.add_synapse_population}: - SynapseMatrixType::SPARSE_GLOBALG - SynapseMatrixType::SPARSE_GLOBALG_INDIVIDUAL_PSM - SynapseMatrixType::SPARSE_INDIVIDUALG @@ -1220,12 +1221,120 @@ calc_kernel_size_func=create_cksf_class( \end_toggle_code Secondly, the row build code must use an extended version of the `addSynapse` function which indicates the kernel indices associated with the synapse. 
For example, the following could be used with a 4 dimensional kernel: \$(addSynapse, idPost, kernRow, kernCol, inChan, outChan); -Finally, in order to use a kernel to initialise SynapseMatrixWeight::INDIVIDUAL variables or generate SynapseMatrixWeight::PROCEDURAL variables on the fly, the variables should be initialised using the InitVarSnippet::Kernel variable initialisation snippet and the kernel itself allocated as an extra global parameter and pushed to device (see \ref extraGlobalParamSim). +In order to use a kernel to initialise SynapseMatrixWeight::INDIVIDUAL variables, the variables should be initialised using the InitVarSnippet::Kernel variable initialisation snippet and the kernel itself allocated as an extra global parameter and pushed to device (see \ref extraGlobalParamSim). +Finally, in order to use a kernel to generate SynapseMatrixWeight::PROCEDURAL variables on the fly, the SynapseMatrixType::PROCEDURAL_KERNELG matrix type should be used. \section sect_sparse_connect_init_modes Sparse connectivity locations Once you have defined how sparse connectivity is going to be initialised, similarly to variables, you can control where it is allocated. This is controlled using the same ``VarLocations`` options described in section \ref sect_var_init_modes and can either be set using the model default specified with ``ModelSpec::setDefaultSparseConnectivityLocation`` or on a per-synapse group basis using ``SynapseGroup::setSparseConnectivityLocation``. ----- -\link sectVariableReferences Previous\endlink | \link UserManual Top\endlink | \link Tutorial1 Next\endlink +\link sectVariableReferences Previous\endlink | \link UserManual Top\endlink | \link sectToeplitzConnectivityInitialisation Next\endlink */ +//---------------------------------------------------------------------------- +/*! 
+\page sectToeplitzConnectivityInitialisation Toeplitz connectivity initialisation + +Synaptic connectivity implemented using SynapseMatrixConnectivity::SPARSE and SynapseMatrixConnectivity::BITMASK can be automatically initialised. + +This can be done using one of a number of predefined _Toeplitz connectivity initialisation snippets_: +- InitToeplitzConnectivitySnippet::Conv2D +- InitToeplitzConnectivitySnippet::AvgPoolConv2D + +\add_toggle_python +In Python, these models can be selected by their unqualified name e.g. "Conv2D". +\end_toggle +For example, to initialise convolutional synaptic connectivity with a 3x3 kernel between two populations of 1024 neurons representing a single 32x32 channel: +\add_toggle_code_cpp +InitToeplitzConnectivitySnippet::Conv2D::ParamValues conv( + 3, 3, // conv_kh, conv_kw + 32, 32, 1, // conv_ih, conv_iw, conv_ic + 32, 32, 1); // conv_oh, conv_ow, conv_oc + +model.addSynapsePopulation<...>( + ... + initToeplitzConnectivity(conv)); +\end_toggle_code +\add_toggle_code_python +conv = {"conv_kh": 3, "conv_kw": 3, + "conv_ih": 32, "conv_iw": 32, "conv_ic": 1, + "conv_oh": 32, "conv_ow": 32, "conv_oc": 1} +model.add_synapse_population( + ... + genn_model.init_toeplitz_connectivity("Conv2D", conv)) +\end_toggle_code + +\section sect_new_toeplitz_connect Defining a new Toeplitz connectivity snippet +Similarly to sparse connectivity initialisation snippets, Toeplitz connectivity initialisation snippets can be created by simply defining a class in the model description. + +For example, the following Toeplitz connectivity initialisation snippet could be used to convolve a square ``kern_size x kern_size`` kernel with the output of populations of ``pop_size x pop_size`` neurons. 
+\add_toggle_code_cpp +class SimpleConv2D : public InitToeplitzConnectivitySnippet::Base +{ +public: + DECLARE_SNIPPET(SimpleConv2D, 2); + + SET_PARAM_NAMES({"kern_size", "pop_size"}); + + SET_DIAGONAL_BUILD_STATE_VARS({{"kernRow", "int", "$(id_diag) / (int)$(kern_size)"}, + {"kernCol", "int", "$(id_diag) % (int)$(kern_size)"}}); + + SET_DIAGONAL_BUILD_CODE( + "const int preRow = $(id_pre) / (int)$(pop_size);\n" + "const int preCol = $(id_pre) % (int)$(pop_size);\n" + "// If we haven't gone off edge of output\n" + "const int postRow = preRow + $(kernRow) - 1;\n" + "const int postCol = preCol + $(kernCol) - 1;\n" + "if(postRow >= 0 && postCol >= 0 && postRow < (int)$(pop_size) && postCol < (int)$(pop_size)) {\n" + "    // Calculate postsynaptic index\n" + "    const int postInd = (postRow * (int)$(pop_size)) + postCol;\n" + "    $(addSynapse, postInd, $(kernRow), $(kernCol));\n" + "}\n"); + + SET_CALC_MAX_ROW_LENGTH_FUNC( + [](unsigned int, unsigned int, const std::vector<double> &pars) + { + return ((unsigned int)pars[0] * (unsigned int)pars[0]); + }); + + SET_CALC_KERNEL_SIZE_FUNC( + [](const std::vector<double> &pars)->std::vector<unsigned int> + { + return {(unsigned int)pars[0], (unsigned int)pars[0]}; + }); +}; +IMPLEMENT_SNIPPET(Ring); +\end_toggle_code +\add_toggle_code_python +simple_conv2d_model = genn_model.create_custom_toeplitz_connect_init_snippet_class( + "simple_conv2d", + param_names=["kern_size", "pop_size"], + diagonal_build_state_vars=[ + ("kernRow", "int", "$(id_diag) / (int)$(kern_size)"), + ("kernCol", "int", "$(id_diag) % (int)$(kern_size)")], + diagonal_build_code= + """ + const int preRow = $(id_pre) / (int)$(pop_size); + const int preCol = $(id_pre) % (int)$(pop_size); + // If we haven't gone off edge of output + const int postRow = preRow + $(kernRow) - 1; + const int postCol = preCol + $(kernCol) - 1; + if(postRow >= 0 && postCol >= 0 && postRow < (int)$(pop_size) && postCol < (int)$(pop_size)) { + // Calculate postsynaptic index + const int postInd = (postRow * 
(int)$(pop_size)) + postCol; + $(addSynapse, postInd, $(kernRow), $(kernCol)); + } + """, + + calc_max_row_len_func=genn_model.create_cmlf_class( + lambda num_pre, num_post, pars: int(pars[0]) * int(pars[0]))(), + calc_kernel_size_func=genn_model.create_cksf_class( + lambda pars: UnsignedIntVector([int(pars[0]), int(pars[0])]))()) +\end_toggle_code +Each diagonal of Toeplitz connectivity is initialised independently by running the snippet of code specified using the \add_cpp_python_text{`SET_DIAGONAL_BUILD_CODE()` macro,`diagonal_build_code` keyword argument} within a loop. +The \$(id_diag) variable can be used to access the index of the diagonal being generated and the \$(id_pre) variable can be used to access the index of the presynaptic neuron currently being processed. +The \add_cpp_python_text{`SET_DIAGONAL_BUILD_STATE_VARS()` macro,`diagonal_build_state_vars` keyword argument} can be used to initialise state variables outside of the loop - in this case \$(id_diag) is split into ``kernRow`` and ``kernCol`` which are subsequently used to access the kernel element repeated along each diagonal. Similarly to sparse connectivity initialisation snippets, when a kernel is used, synapses are added using the \$(addSynapse, target, kernRow, kernCol) function where the size of the kernel is specified using the \add_cpp_python_text{SET_CALC_KERNEL_SIZE_FUNC() macro, `calc_kernel_size_func` keyword argument}. 
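The per-diagonal logic of the SimpleConv2D example can be traced in plain Python (not GeNN code; ``kern_size=3`` and ``pop_size=4`` are chosen purely for illustration): each value of \$(id_diag) selects one kernel element, and every presynaptic neuron on that diagonal targets at most one postsynaptic neuron.

```python
# Plain-Python trace of the diagonal build logic in the SimpleConv2D
# example, with kern_size=3 and pop_size=4 assumed for illustration.
KERN_SIZE, POP_SIZE = 3, 4

def diagonal_synapses(id_diag):
    """Return the (id_pre, id_post) pairs added for one diagonal."""
    # Equivalent of the diagonal build state variables kernRow/kernCol
    kern_row, kern_col = divmod(id_diag, KERN_SIZE)
    synapses = []
    # Equivalent of the loop the diagonal build code runs in, over id_pre
    for id_pre in range(POP_SIZE * POP_SIZE):
        pre_row, pre_col = divmod(id_pre, POP_SIZE)
        post_row = pre_row + kern_row - 1
        post_col = pre_col + kern_col - 1
        # Skip targets that fall off the edge of the postsynaptic grid
        if 0 <= post_row < POP_SIZE and 0 <= post_col < POP_SIZE:
            synapses.append((id_pre, post_row * POP_SIZE + post_col))
    return synapses

# The centre kernel element (id_diag == 4) connects each neuron to itself
assert diagonal_synapses(4) == [(i, i) for i in range(POP_SIZE * POP_SIZE)]
```

Off-centre diagonals add fewer synapses because some targets fall outside the postsynaptic grid, which is exactly the boundary check in the diagonal build code.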
+ +----- +\link sectSparseConnectivityInitialisation Previous\endlink | \link UserManual Top\endlink | \link Tutorial1 Next\endlink */ From 40b573098c2abd50ed46f01193ca8d9fb0a5772d Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Fri, 11 Feb 2022 14:24:02 +0000 Subject: [PATCH 09/12] tweaks --- doxygen/10_UserManual.dox | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/doxygen/10_UserManual.dox b/doxygen/10_UserManual.dox index 55a3a7a5a5..aa5e31ec72 100644 --- a/doxygen/10_UserManual.dox +++ b/doxygen/10_UserManual.dox @@ -1105,9 +1105,9 @@ Tranposing is currently only possible on variables belonging to synapse groups w /*! \page sectSparseConnectivityInitialisation Sparse connectivity initialisation -Synaptic connectivity implemented using SynapseMatrixConnectivity::SPARSE and SynapseMatrixConnectivity::BITMASK can be automatically initialised. +Using _sparse connectivity initialisation snippets_, synaptic connectivity implemented using SynapseMatrixConnectivity::SPARSE and SynapseMatrixConnectivity::BITMASK can be automatically initialised and SynapseMatrixConnectivity::PROCEDURAL connectivity can be generated on the fly. -This can be done using one of a number of predefined _sparse connectivity initialisation snippets_: +There are a number of predefined sparse connectivity initialisation snippets: - InitSparseConnectivitySnippet::OneToOne - InitSparseConnectivitySnippet::FixedProbability - InitSparseConnectivitySnippet::FixedProbabilityNoAutapse @@ -1235,9 +1235,11 @@ /*! \page sectToeplitzConnectivityInitialisation Toeplitz connectivity initialisation -Synaptic connectivity implemented using SynapseMatrixConnectivity::SPARSE and SynapseMatrixConnectivity::BITMASK can be automatically initialised. +_Toeplitz matrices_ are ones where the values along all diagonals are constant. 
+_Doubly-blocked Toeplitz matrices_ are matrices made out of Toeplitz sub-matrices in a structure that is itself Toeplitz, so the sub-matrices are repeated along diagonals. +Doubly-blocked Toeplitz matrices can be used to represent common machine learning operations including convolutions and, in GeNN, _Toeplitz connectivity initialisation snippets_ can be used to generate such connectivity on the fly with SynapseMatrixType::TOEPLITZ_KERNELG (see \ref subsect34). -This can be done using one of a number of predefined _Toeplitz connectivity initialisation snippets_: +There are a number of predefined Toeplitz connectivity initialisation snippets: - InitToeplitzConnectivitySnippet::Conv2D - InitToeplitzConnectivitySnippet::AvgPoolConv2D @@ -1303,7 +1305,7 @@ public: return {(unsigned int)pars[0], (unsigned int)pars[0]}; }); }; -IMPLEMENT_SNIPPET(Ring); +IMPLEMENT_SNIPPET(SimpleConv2D); \end_toggle_code \add_toggle_code_python simple_conv2d_model = genn_model.create_custom_toeplitz_connect_init_snippet_class( From 558d5310614a49a2862da3031db4fc12f025dbfd Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Fri, 11 Feb 2022 15:33:07 +0000 Subject: [PATCH 10/12] tried to add some more detail about the nuance of VarAccess and VarAccessMode for custom updates --- doxygen/10_UserManual.dox | 18 +++++++++++++----- 1 file changed, 13 insertions(+), 5 deletions(-) diff --git a/doxygen/10_UserManual.dox b/doxygen/10_UserManual.dox index aa5e31ec72..f91bdfdc6e 100644 --- a/doxygen/10_UserManual.dox +++ b/doxygen/10_UserManual.dox @@ -800,7 +800,8 @@ For convenience the methods this class should implement can be implemented using - \add_cpp_python_text{SET_DERIVED_PARAMS()\, SET_PARAM_NAMES()\, SET_VARS(), ``derived_params``\, ``param_names``\, ``var_name_types``} perform the same roles as they do in the neuron models discussed in \ref sect_own. 
- \add_cpp_python_text{DECLARE_CUSTOM_UPDATE_MODEL(TYPE\, NUM_PARAMS\, NUM_VARS\, NUM_VAR_REFS) is an extended version of ``DECLARE_MODEL()`` which declares the boilerplate code required for a custom update with variable references as well as variables and parameters,`class_name`: the name of the new model}.
- \add_cpp_python_text{SET_VAR_REFS(),`var_refs`} defines the names, type strings (e.g. "float", "double", etc.) and (optionally) access mode
- of the varaible references. The variables defined here as `NAME` can then be used in the syntax \$(NAME) in the update code string. Variable reference types must match those of the underlying variables.
+ of the variable references. The variables defined here as `NAME` can then be used in the syntax \$(NAME) in the update code string. Variable reference types must match those of the underlying variables.
+ Supported access modes are \add_cpp_python_text{VarAccessMode::READ_WRITE, pygenn.genn_wrapper.Models.VarAccessMode_READ_WRITE}, \add_cpp_python_text{VarAccessMode::READ_ONLY, pygenn.genn_wrapper.Models.VarAccessMode_READ_ONLY}, \add_cpp_python_text{VarAccessMode::REDUCE_SUM, pygenn.genn_wrapper.Models.VarAccessMode_REDUCE_SUM} and \add_cpp_python_text{VarAccessMode::REDUCE_MAX, pygenn.genn_wrapper.Models.VarAccessMode_REDUCE_MAX}.
- \add_cpp_python_text{SET_UPDATE_CODE(UPDATE_CODE),``update_code=UPDATE_CODE``}: where UPDATE_CODE contains the code to perform the custom update.
For example, using these \add_cpp_python_text{macros,keyword arguments}, we can define a custom update which will set a referenced variable to the value of a custom update model state variable: @@ -813,7 +814,7 @@ public: SET_UPDATE_CODE("$(r) = $(v);"); SET_VARS({{"v", "scalar", VarAccess::READ_ONLY}}); - SET_VAR_REFS({{"r", "scalar"}, + SET_VAR_REFS({{"r", "scalar", VarAccessMode::READ_WRITE}, }; \end_toggle_code \add_toggle_code_python @@ -821,10 +822,12 @@ reset_model = genn_model.create_custom_custom_update_class( "reset", var_name_types=[("v", "scalar", VarAccess_READ_ONLY)], - var_refs=[("r", "scalar")], + var_refs=[("r", "scalar", VarAccessMode_READ_WRITE)], update_code="$(r) = $(v);") \end_toggle_code +When used in a model with batch size > 1, whether custom updates of this sort are batched or not depends on the variables their references point to. If any referenced variables have \add_cpp_python_text{VarAccess::READ_ONLY_DUPLICATE, pygenn.genn_wrapper.Models.VarAccess_READ_ONLY_DUPLICATE} or \add_cpp_python_text{VarAccess::READ_WRITE, pygenn.genn_wrapper.Models.VarAccess_READ_WRITE} access modes, then the update will be batched and any variables associated with the custom update which also have \add_cpp_python_text{VarAccess::READ_ONLY_DUPLICATE, pygenn.genn_wrapper.Models.VarAccess_READ_ONLY_DUPLICATE} or \add_cpp_python_text{VarAccess::READ_WRITE, pygenn.genn_wrapper.Models.VarAccess_READ_WRITE} access modes will be duplicated across the batches. 
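The batching rule above can be sketched in plain Python (illustrative only — this is not the pygenn API, just a model of the storage semantics): a custom update variable with a non-duplicated access mode is stored once, while a duplicated referenced variable holds one copy per batch, so the update runs once per batch:

```python
# Sketch of a batched "$(r) = $(v);" custom update:
# v is a custom update variable (READ_ONLY, one shared copy),
# r is a referenced variable (READ_WRITE, duplicated per batch).
batch_size = 4
v = 0.5                  # single shared copy
r = [0.0] * batch_size   # one copy per batch

# because r is duplicated across batches, the update is batched
for b in range(batch_size):
    r[b] = v
```

After the update, every batch's copy of `r` holds the single shared value of `v`.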
+ \subsection custom_update_reduction Batch reduction As well as the standard variable access modes described in \ref subsect11, custom updates support variables with several 'reduction' access modes: - \add_cpp_python_text{VarAccess::REDUCE_BATCH_SUM, ``pygenn.genn_wrapper.Models.VarAccess_REDUCE_BATCH_SUM``} @@ -844,19 +847,24 @@ public: "$(gradient) = 0;\n"); SET_VARS({{"reducedGradient", "scalar", VarAccess::REDUCE_BATCH_SUM}}); - SET_VAR_REFS({{"gradient", "scalar"}, + SET_VAR_REFS({{"gradient", "scalar", VarAccessMode::READ_ONLY}, }; \end_toggle_code \add_toggle_code_python gradient_batch_reduce_model = genn_model.create_custom_custom_update_class( "gradient_batch_reduce", var_name_types=[("reducedGradient", "scalar", VarAccess_REDUCE_BATCH_SUM)], - var_refs=[("gradient", "scalar")], + var_refs=[("gradient", "scalar", VarAccessMode_READ_ONLY)], update_code=""" $(reducedGradient) = $(gradient); $(gradient) = 0; """) \end_toggle_code + +Custom updates can also perform the same sort of reduction operation _into_ variable references with the equivalent 'reduction' access modes: +- \add_cpp_python_text{VarAccessMode::REDUCE_SUM, ``pygenn.genn_wrapper.Models.VarAccess_REDUCE_SUM``} +- \add_cpp_python_text{VarAccessMode::REDUCE_MAX, ``pygenn.genn_wrapper.Models.VarAccess_REDUCE_MAX``} + \note Reading from variables with a reduction access mode is undefined behaviour. From c033d86aa9f530ca8cf34ee7dffce1f096b5c57f Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Fri, 11 Feb 2022 16:34:41 +0000 Subject: [PATCH 11/12] thomas fixes --- doxygen/09_ReleaseNotes.dox | 2 +- doxygen/10_UserManual.dox | 37 +++++++++++++++++++------------------ 2 files changed, 20 insertions(+), 19 deletions(-) diff --git a/doxygen/09_ReleaseNotes.dox b/doxygen/09_ReleaseNotes.dox index 882f693613..b752ce868d 100644 --- a/doxygen/09_ReleaseNotes.dox +++ b/doxygen/09_ReleaseNotes.dox @@ -9,7 +9,7 @@ User Side Changes 1. 
While a wide range of convolutional-type connectivity can be implemented using SynapseMatrixConnectivity::PROCEDURAL, the performance is often worse than sparse connectivity. SynapseMatrixConnectivity::TOEPLITZ provides a more efficient solution, with InitToeplitzConnectivitySnippet::Conv2D and InitToeplitzConnectivitySnippet::AvgPoolConv2D implementing some typical connectivity patterns (see \ref sectToeplitzConnectivityInitialisation).
2. Shared weight kernels previously had to be provided as extra global parameters via the InitVarSnippet::Kernel variable initialisation snippet. This meant kernels had to be manually allocated to the correct size and couldn't be initialised using standard functionality. SynapseMatrixWeight::KERNEL allows kernels to be treated as standard state variables (see \ref subsect34).
3. Some presynaptic updates need to update the state of presynaptic neurons as well as postsynaptic ones. These updates can now be made using the \$(addToPre,...) function from presynaptic update code and the destination additional input variable can be specified using SynapseGroup::setPreTargetVar (see \ref sect34).
-4. On Windows, all models in the same directory would build their generated code into DLLs with the same name, prevented the the cachine system introduced in v4.5.0 working properly. CodeGenerator::PreferencesBase::includeModelNameInDLL includes the name of the model in the DLL filename, resolving this problem. This is now the default behaviour in PyGeNN but, when using GeNN from C++, the flag must be manually set and MSBuild projects updated to link to the correct DLL.
+4. On Windows, all models in the same directory would build their generated code into DLLs with the same name, preventing the caching system introduced in v4.5.0 from working properly. CodeGenerator::PreferencesBase::includeModelNameInDLL includes the name of the model in the DLL filename, resolving this problem.
This is now the default behaviour in PyGeNN but, when using GeNN from C++, the flag must be manually set and MSBuild projects updated to link to the correct DLL. 5. Neuron code can now sample the binomial distribution using \$(gennrand_binomial) and this can be used to initialise variables with InitVarSnippet::Binomial (see \ref neuron_rng and \ref sectVariableInitialisation) 6. In the latest version of Windows Subsystem for Linux, CUDA is supported but libcuda is mounted in a non-standard location. GeNN's CUDA backend now adds this location to the linker paths. diff --git a/doxygen/10_UserManual.dox b/doxygen/10_UserManual.dox index f91bdfdc6e..2e2294aa86 100644 --- a/doxygen/10_UserManual.dox +++ b/doxygen/10_UserManual.dox @@ -1113,7 +1113,7 @@ Tranposing is currently only possible on variables belonging to synapse groups w /*! \page sectSparseConnectivityInitialisation Sparse connectivity initialisation -Using _sparse connectivity initialisation snippets_, synaptic connectivity implemented using SynapseMatrixConnectivity::SPARSE and SynapseMatrixConnectivity::BITMASK can be automatically initialised and SynapseMatrixConnectivity::PROCEDURAL connectivity cna be generated on the fly. +Using _sparse connectivity initialisation snippets_, synaptic connectivity implemented using SynapseMatrixConnectivity::SPARSE and SynapseMatrixConnectivity::BITMASK can be automatically initialised and SynapseMatrixConnectivity::PROCEDURAL connectivity can be generated on the fly. There are a number of predefined sparse connectivity initialisation snippets: - InitSparseConnectivitySnippet::OneToOne @@ -1268,36 +1268,37 @@ model.addSynapsePopulation<...>( \add_toggle_code_python conv = {"conv_kh": 3, "conv_kw": 3, "conv_ih": 32, "conv_iw": 32, "conv_ic": 1, - "conv_oh": 32, "conv_ow": 32, "conv_oc": 2} + "conv_oh": 32, "conv_ow": 32, "conv_oc": 1} model.add_synapse_population( ... 
genn_model.init_toeplitz_connectivity("Conv2D", conv))
\end_toggle_code
+Here, `conv_kh` denotes the kernel height, `conv_kw` the kernel width, `conv_ih` the input layer height, `conv_iw` the input layer width, `conv_ic` the number of input channels, `conv_oh` the output layer height, `conv_ow` the output layer width, and `conv_oc` the number of output channels/filters.

\section sect_new_toeplitz_connect Defining a new Toeplitz connectivity snippet
Similarly to sparse connectivity initialisation snippets, Toeplitz connectivity initialisation snippets can be created by simply defining a class in the model description.
-For example, the following Toeplitz connectivity initialisation snippet could be used to convolve a square kernel with the output of populations of ``kern_size x kern_size`` neurons.
+For example, the following Toeplitz connectivity initialisation snippet could be used to convolve a `kern_dim x kern_dim` square kernel with the output of populations of `pop_dim x pop_dim` neurons.
\add_toggle_code_cpp
class SimpleConv2D : public InitToeplitzConnectivitySnippet::Base
{
public:
    DECLARE_SNIPPET(SimpleConv2D, 2);

-    SET_PARAM_NAMES({"kern_size", "pop_size"});
+    SET_PARAM_NAMES({"kern_dim", "pop_dim"});

-    SET_DIAGONAL_BUILD_STATE_VARS({{"kernRow", "int", "$(id_diag) / (int)$(kern_size)"},
-                                   {"kernCol", "int", "$(id_diag) % (int)$(kern_size)"}});
+    SET_DIAGONAL_BUILD_STATE_VARS({{"kernRow", "int", "$(id_diag) / (int)$(kern_dim)"},
+                                   {"kernCol", "int", "$(id_diag) % (int)$(kern_dim)"}});

    SET_DIAGONAL_BUILD_CODE(
-        "const int preRow = $(id_pre) / (int)$(pop_size);\n"
-        "const int preCol = $(id_pre) / % (int)$(pop_size);\n"
+        "const int preRow = $(id_pre) / (int)$(pop_dim);\n"
+        "const int preCol = $(id_pre) % (int)$(pop_dim);\n"
        "// If we haven't gone off edge of output\n"
        "const int postRow = preRow + $(kernRow) - 1;\n"
        "const int postCol = preCol + $(kernCol) - 1;\n"
-        "if(postRow >= 0 && postCol >= 0 && postRow < (int)$(pop_size) && postCol < (int)$(pop_size)) {\n"
+        "if(postRow >= 0 && postCol >= 0 && postRow < (int)$(pop_dim) && postCol < (int)$(pop_dim)) {\n"
        "    // Calculate postsynaptic index\n"
-        "    const int postInd = ((postRow * (int)$(pop_size)) + postCol;\n"
+        "    const int postInd = (postRow * (int)$(pop_dim)) + postCol;\n"
        "    $(addSynapse, postInd, $(kernRow), $(kernCol));\n"
        "}\n");
@@ -1318,20 +1319,20 @@ IMPLEMENT_SNIPPET(SimpleConv2D);
\add_toggle_code_python
simple_conv2d_model = genn_model.create_custom_toeplitz_connect_init_snippet_class(
    "simple_conv2d",
-    param_names=["kern_size", "pop_size"],
+    param_names=["kern_dim", "pop_dim"],
    diagonal_build_state_vars=[
-        ("kernRow", "int", "$(id_diag) / (int)$(kern_size)"),
-        ("kernCol", "int", "$(id_diag) % (int)$(kern_size)")],
+        ("kernRow", "int", "$(id_diag) / (int)$(kern_dim)"),
+        ("kernCol", "int", "$(id_diag) % (int)$(kern_dim)")],
    diagonal_build_code=
        """
-        const int preRow = $(id_pre) / (int)$(pop_size);
-        const int preCol = $(id_pre) / % (int)$(pop_size);
+        const int preRow = 
$(id_pre) / (int)$(pop_dim); + const int preCol = $(id_pre) % (int)$(pop_dim); // If we haven't gone off edge of output const int postRow = preRow + $(kernRow) - 1; const int postCol = preCol + $(kernCol) - 1; - if(postRow >= 0 && postCol >= 0 && postRow < (int)$(pop_size) && postCol < (int)$(pop_size)) { + if(postRow >= 0 && postCol >= 0 && postRow < (int)$(pop_dim) && postCol < (int)$(pop_dim)) { // Calculate postsynaptic index - const int postInd = (postRow * (int)$(pop_size)) + postCol; + const int postInd = (postRow * (int)$(pop_dim)) + postCol; $(addSynapse, postInd, $(kernRow), $(kernCol)); } """, @@ -1341,7 +1342,7 @@ simple_conv2d_model = genn_model.create_custom_toeplitz_connect_init_snippet_cla calc_kernel_size_func=genn_model.create_cmlf_class( lambda pars: UnsignedIntVector([int(pars[0]), int(pars[0])]))()) \end_toggle_code -Each diagonal of Toeplitz connectivity is initialised independantly by running the snippet of code specified using the \add_cpp_python_text{`SET_DIAGONAL_BUILD_CODE()` macro,`diagonal_build_code` keyword argument} within a loop. +Each diagonal of Toeplitz connectivity is initialised independently by running the snippet of code specified using the \add_cpp_python_text{`SET_DIAGONAL_BUILD_CODE()` macro,`diagonal_build_code` keyword argument} within a loop. The \$(id_diag) variable can be used to access the index of the diagonal being generated and the \$(id_pre) variable can be used to access the index of the presynaptic neuron currently being processed. The \add_cpp_python_text{`SET_DIAGONAL_BUILD_STATE_VARS()` macro,`diagonal_build_state_vars` keyword argument} can be used to initialise state variables outside of the loop - in this case \$(id_diag) is split into ``kernRow`` and ``kernCol`` which are subsequently used to access the kernel element repeated along each diagonal. 
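To make the diagonal-building loop concrete, here is a plain-Python sketch of the generation scheme used by the `SimpleConv2D` example above (illustrative only — the real loop runs inside GeNN's generated code, and the helper name is hypothetical):

```python
def build_diagonal(id_diag, kern_dim, pop_dim):
    """Return (id_pre, id_post, kern_row, kern_col) synapses for one diagonal."""
    # the per-diagonal 'build state vars' from the snippet above
    kern_row = id_diag // kern_dim
    kern_col = id_diag % kern_dim
    synapses = []
    # the diagonal build code runs once per presynaptic neuron
    for id_pre in range(pop_dim * pop_dim):
        pre_row, pre_col = divmod(id_pre, pop_dim)
        post_row = pre_row + kern_row - 1
        post_col = pre_col + kern_col - 1
        # skip synapses that fall off the edge of the output layer
        if 0 <= post_row < pop_dim and 0 <= post_col < pop_dim:
            synapses.append((id_pre, (post_row * pop_dim) + post_col,
                             kern_row, kern_col))
    return synapses

# diagonal 4 of a 3x3 kernel is its centre element: a one-to-one mapping
centre = build_diagonal(4, kern_dim=3, pop_dim=4)
assert all(pre == post for pre, post, _, _ in centre)
```

Each of the `kern_dim * kern_dim` diagonals contributes at most one synapse per presynaptic neuron, which is why a whole convolution can be generated from such a small kernel.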
Similarly to sparse connectivity initialisation snippets when a kernel is used, synapses are added using the \$(addSynapse, target, kernRow, kernCol) function where the size of the kernel is specified using the \add_cpp_python_text{SET_CALC_KERNEL_SIZE_FUNC() macro, `calc_kernel_size_func` keyword argument}.

From 5c89509faf0a95c9794f72bd07f4c71cc2f73fb1 Mon Sep 17 00:00:00 2001
From: neworderofjamie
Date: Fri, 11 Feb 2022 16:47:30 +0000
Subject: [PATCH 12/12] tweak some more

---
 doxygen/10_UserManual.dox | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doxygen/10_UserManual.dox b/doxygen/10_UserManual.dox
index 2e2294aa86..41b32b88b5 100644
--- a/doxygen/10_UserManual.dox
+++ b/doxygen/10_UserManual.dox
@@ -1254,7 +1254,7 @@ There are a number of predefined Toeplitz connectivity initialisation snippets:
 \add_toggle_python
 In Python, these models can be selected by their unqualified name e.g. "Conv2D".
 \end_toggle
-For example, to initialise convolutional synaptic connectivity with a 3x3 kernel between two populations of 1024 neurons representing a single 32x32 channel:
+For example, to initialise convolutional synaptic connectivity with a 3x3 kernel between two populations of 1024 neurons, each representing a 32x32 layer with a single channel:
\add_toggle_code_cpp
InitToeplitzConnectivitySnippet::Conv2D::ParamValues conv(
    3, 3,   // conv_kh, conv_kw