Jack Hunt on 20 June 2024 at 22:37
https://fr.mathworks.com/matlabcentral/answers/2130616-using-dlarray-with-betarnd-randg
Commented: Jack Hunt about 8 hours ago
Accepted Answer: Matt J
I am writing a custom layer with the Deep Learning Toolbox, and part of the forward pass of this layer draws from a beta distribution whose b parameter is to be optimised as part of the network training. However, I seem to be unable to use betarnd (and, by extension, randg) with a dlarray-valued parameter.
Consider the following, which works as expected.
>> betarnd(1, 0.1)
ans =
0.2678
However, if I instead do the following, then it does not work.
>> b = dlarray(0.1)
b =
1×1 dlarray
0.1000
>> betarnd(1, b)
Error using randg
SHAPE must be a full real double or single array.
Error in betarnd (line 34)
g2 = randg(b,sizeOut); % could be Infs or NaNs
Is it not possible to use such functions with parameters to be optimised via automatic differentiation (hence dlarray)?
Many thanks
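(For completeness: the sampling call itself runs if the dlarray is first stripped with extractdata, although the result is then detached from the autodiff trace and carries no gradient with respect to b. A minimal sketch:)

```matlab
% Workaround sketch: strip the dlarray before sampling. The draw
% succeeds, but it is detached from the autodiff trace, so no
% gradient w.r.t. b flows through it.
b = dlarray(0.1);
sample = betarnd(1, extractdata(b));
```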
Accepted Answer
Matt J on 20 June 2024 at 22:51
Edited: Matt J on 20 June 2024 at 23:00
Random number generation operations do not have derivatives in the standard sense. You will have to define an approximate derivative yourself by implementing a backward() method.
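(Editorial aside: in the special case alpha = 1 from the question, the beta inverse CDF has the closed form F⁻¹(u) = 1 − (1 − u)^(1/b), so the draw can be written entirely with differentiable dlarray operations and no custom backward() is needed. A minimal sketch, assuming a scalar b:)

```matlab
% Reparametrized Beta(1, b) draw that stays inside the autodiff trace:
% the noise U is fixed, and b enters only through differentiable ops.
function [Z, dZdb] = sampleBeta1(b)
    U = rand();                 % fixed noise, drawn outside the trace
    Z = 1 - (1 - U).^(1 ./ b);  % Beta(1, b) sample via inverse transform
    dZdb = dlgradient(Z, b);    % autodiff through the deterministic map
end
```

Called as `[Z, dZdb] = dlfeval(@sampleBeta1, dlarray(0.1));`, since dlgradient must run inside dlfeval on a traced dlarray.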
Matt J on 20 June 2024 at 23:17
Edited: Matt J about 13 hours ago
You will have to define an approximate derivative yourself by implementing a backward() method.
One candidate would be to reparametrize the beta distribution in terms of uniform random variables, U1 and U2, which you would save during forward propagation:
function [Z, U1, U2] = forward_pass(alpha, beta)
% Generate uniform random variables
U1 = rand();
U2 = rand();
% Generate Gamma(alpha, 1) and Gamma(beta, 1) using the inverse CDF (ppf)
X = gaminv(U1, alpha, 1);
Y = gaminv(U2, beta, 1);
% Combine to get Beta(alpha, beta)
Z = X / (X + Y);
end
During back propagation, your backward() method would differentiate non-stochastically with respect to alpha and beta, using the saved U1 and U2 data as fixed, given values:
function [dZ_dalpha, dZ_dbeta] = backward_pass(alpha, beta, U1, U2, grad_gaminv)
% Differentiate gaminv with respect to the shape parameter alpha and beta
dX_dalpha = grad_gaminv(U1, alpha);
dY_dbeta = grad_gaminv(U2, beta);
% Compute partial derivatives of Z with respect to X and Y
X = gaminv(U1, alpha, 1);
Y = gaminv(U2, beta, 1);
dZ_dX = Y / (X + Y)^2;
dZ_dY = -X / (X + Y)^2;
% Use the chain rule to compute gradients with respect to alpha and beta
dZ_dalpha = dZ_dX * dX_dalpha;
dZ_dbeta = dZ_dY * dY_dbeta;
end
This assumes you have provided a function grad_gaminv() which can differentiate gaminv(), e.g.,
function grad = grad_gaminv(U, shape)
% Placeholder for the actual derivative computation of gaminv with respect to the shape parameter
% Here we use a numerical approximation for demonstration
delta = 1e-6;
grad = (gaminv(U, shape + delta, 1) - gaminv(U, shape, 1)) / delta;
end
DISCLAIMER: All code above was ChatGPT-generated.
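(Editorial aside: a quick way to sanity-check the pieces above is to hold the saved U1, U2 fixed and compare the chain-rule gradient from backward_pass against a central finite difference of the reparametrized forward map. A sketch, assuming the three functions above are on the path:)

```matlab
% Sanity check (sketch): with U1, U2 held fixed, the analytic
% chain-rule gradient should match a central finite difference
% of the reparametrized forward map in beta.
alpha = 1;  beta = 0.1;  h = 1e-5;
[~, U1, U2] = forward_pass(alpha, beta);
[~, dZ_dbeta] = backward_pass(alpha, beta, U1, U2, @grad_gaminv);
X  = gaminv(U1, alpha, 1);
Zp = X ./ (X + gaminv(U2, beta + h, 1));
Zm = X ./ (X + gaminv(U2, beta - h, 1));
fd = (Zp - Zm) / (2 * h);
% dZ_dbeta and fd should agree to several significant digits
```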
Jack Hunt about 8 hours ago
I see, so I do indeed need to use a closed-form gradient. I had naively assumed that the autodiff engine would treat the stochastic (RNG) quantities as non-stochastic and essentially do as you have described above.
Thank you for the answer. I shall work through the maths (re-derive the derivatives) and implement it the manual way. I have been spoiled by autodiff over the last decade or so; it's been some time since I explicitly wrote a backward pass!
More Answers (0)
Tags
- deep learning
- statistics
- matlab
- neural networks
- random number generator