    Matrix Multiplication / matmul alternative

    Pythonista
    • Valiarnt last edited by Valiarnt

      Hey, I just recently got Pythonista, and love the convenience of it. I'm not a super experienced coder, and I think I'm looking for an alternative to numpy's matmul, which does not seem to be present in the version of numpy that ships with the app. The '@' operator in base Python does not accept ndarray datatypes here. I think I might also be having another issue with my code, completely separate, but any piece of the puzzle is a step forward.

      The matrices I am multiplying are 2-D, so np.dot() almost worked, but then it started giving me a completely different error. When I pass in multiple matrices, they arrive as one large 3-D array that no longer works with np.dot(). Something in my iterating must be off.
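
      For example, something like this reproduces the kind of shape error I'm seeing (the shapes here are made up for illustration):

          import numpy as np

          w1 = np.ones((5, 784))
          x = np.ones((784, 1))
          print(np.dot(w1, x).shape)  # (5, 1) -- inner dimensions match, works

          w2 = np.ones((10, 5))
          np.dot(w2, x)  # ValueError: shapes (10,5) and (784,1) not aligned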

      • mikael @Valiarnt last edited by

        @Valiarnt, can you share some code?

        • JonB last edited by

          Make sure you are using the Python 3 interpreter -- I think the @ operator was introduced in Python 3.5. Are you getting a syntax error with @?

          In any case, @/matmul should be the same as .dot, so if none of these work, that suggests you have other problems, like incorrectly sized arrays. Check .shape -- if the first matrix is (n, m), the second should be (m, p).

          Are you using numpy arrays or matrix? You could also convert to matrix (np.asmatrix()) and then use regular *.

          It would be helpful if you copy/paste the code and the error you are getting.
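
          For example (an illustrative sketch -- the shapes are invented):

              import numpy as np

              w = np.random.standard_normal((5, 784))  # (n, m)
              x = np.random.standard_normal((784, 1))  # (m, p) -- inner dims must match
              print(w.shape, x.shape)                  # always worth checking first
              print(np.dot(w, x).shape)                # (5, 1)

              # matrix objects overload * as matrix multiplication
              wm, xm = np.asmatrix(w), np.asmatrix(x)
              print((wm * xm).shape)                   # (5, 1)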

          • JonB last edited by

            Actually, on second look: matmul was introduced in numpy 1.10.0, but the current Pythonista ships numpy 1.8.0.

            https://github.com/omz/Pythonista-Issues/issues/533
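
            So it's worth checking the numpy version you are actually running (a quick, generic check):

                import numpy as np

                print(np.__version__)         # 1.8.0 on current Pythonista, per the issue above
                print(hasattr(np, 'matmul'))  # False on numpy < 1.10 -- fall back to np.dot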

            • Valiarnt last edited by

              I have this project split into 2 files; this is the first.
              I am definitely getting matrices with the wrong sizes.

              import numpy as np

              class NeuralNetwork:

                  def __init__(self, layer_sizes):
                      # layer_sizes = (784, 5, 10)
                      weight_shapes = [(a, b) for a, b in zip(layer_sizes[1:], layer_sizes[:-1])]
                      print(weight_shapes)  # prints [(5, 784), (10, 5)]
                      self.weights = [np.random.standard_normal(s) / s[0]**.5 for s in weight_shapes]
                      self.biases = [np.zeros((s, 1)) for s in layer_sizes[1:]]

                  def predict(self, a):
                      for w, b in zip(self.weights, self.biases):
                          g = np.array([])
                          for a in a:
                              a = self.activation(np.dot(w, a)) + b
                              g = np.append(g, a)
                              print(" Data Set Processed")
                          print('Result of Activation Function')
                      return g

                  @staticmethod
                  def activation(x):
                      return 1 / (1 + np.exp(-x))
              
              • Valiarnt last edited by

                This is the second file.
                The data file is located here, and just sits in the same folder.

                import NeuralNetwork as nn
                import numpy as np

                # Data collection
                with np.load('mnist.npz') as data:
                    training_images = data['training_images']
                    print(training_images.shape)
                    training_labels = data['training_labels']
                    print(training_labels.shape)

                layer_sizes = (784, 5, 10)

                net = nn.NeuralNetwork(layer_sizes)
                prediction = net.predict(training_images[:1])
                # Changing this value to a single integer does not give any problems;
                # passing in multiple values does, however. I'm thinking I need to
                # iterate outside of the function rather than inside.

                print('Prediction Shape: ')
                print(prediction.shape)
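
                Here is what I mean, with stand-in data in place of the real file (assuming it stores arrays shaped (N, 784, 1)):

                    import numpy as np

                    training_images = np.zeros((100, 784, 1))  # stand-in for the mnist data
                    print(training_images[0].shape)   # (784, 1)    -- a single image
                    print(training_images[:1].shape)  # (1, 784, 1) -- a batch of one, an extra dimension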

                  • JonB last edited by JonB

                    Is

                        for a in a:
                            a = self.activation(np.dot(w, a)) + b

                    valid? I'd think you would want another variable name or two, to be unambiguous. It might work, but it looks suspect, and depends on some arcane scoping rules if it does. You are using a as the input variable (a list of (784, 1) arrays), then as the loop iterator (a single (784, 1)), then assigning a new value to that same variable name... Use three different names!

                    Have you tried printing the shape of a? Is training_images an array of (784, 1) ndarrays?
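
                    To see why that is risky, a tiny standalone example:

                        a = [1, 2, 3]
                        for a in a:     # the loop variable clobbers the list it iterates over
                            a = a * 10
                        print(a)        # 30 -- a now holds the last result; the original list is gone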

                    • JonB last edited by

                      Ok, the other problem... it seems to me that you are doing a (5, 784) dotted with a (784, 1) for the first loop. Dot should broadcast on its own (if the training data is (N, 784, 1), I think), though perhaps you might need tensordot.

                      But then on the second iteration, you try to multiply a (10, 5) by the (784, 1)...

                      Instead, you need to store the current result in a separate variable.

                      Maybe

                          def predict(self, a):
                              layer_output = a
                              for w, b in zip(self.weights, self.biases):
                                  layer_output = self.activation(np.dot(w, layer_output) + b)
                              return layer_output

                      which I think should give an output sized (N, 10) if a is sized (N, 784).

                      Or perhaps

                          def predict(self, a):
                              output = []
                              for data in a:
                                  layer_output = data
                                  for w, b in zip(self.weights, self.biases):
                                      layer_output = self.activation(np.dot(w, layer_output) + b)
                                  output.append(layer_output)  # plain list -- ndarrays have no .append
                              return np.array(output)

                      in case broadcasting doesn't work (but it should, and it will be loads faster than a loop).
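
                      As a quick check (stand-in data again, and assuming the class file is named NeuralNetwork.py as in your import):

                          import numpy as np
                          import NeuralNetwork as nn

                          net = nn.NeuralNetwork((784, 5, 10))
                          fake_batch = np.random.standard_normal((3, 784, 1))  # 3 fake "images"
                          prediction = net.predict(fake_batch)
                          print(prediction.shape)  # (3, 10, 1) with the per-item loop version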
