To be more precise, the question should be "Is Point3D on BRepFace?" :)
There is no direct function to determine that, but you can get to the answer using other functions, like the isParameterOnFace function of SurfaceEvaluator.
To use that, though, we first need to get the parameter corresponding to the point. It turns out the function we need, getParameterAtPoint, actually works like this: "If the point does not lie on the surface, the parameter of the nearest point on the surface will generally be returned." So we have to check for that too.
Here is the Python code we can use:
import adsk.core, adsk.fusion

def run(context):
    app = adsk.core.Application.get()
    ui = app.userInterface
    selections = ui.activeSelections
    sketchPoint = selections.item(0).entity
    face = selections.item(1).entity
    evaluator = face.evaluator
    point = sketchPoint.worldGeometry
    ui.messageBox("point, x=" + str(point.x) +
                  "; y=" + str(point.y) +
                  "; z=" + str(point.z))
    (returnValue, parameter) = evaluator.getParameterAtPoint(point)
    ui.messageBox("parameter, u=" + str(parameter.x) +
                  "; v=" + str(parameter.y))
    if not returnValue:
        # could not get the parameter for it so
        # it's probably not on the face
        ui.messageBox("Point not on face\n(Could not get parameter)")
        return
    (returnValue, projectedPoint) = evaluator.getPointAtParameter(parameter)
    ui.messageBox("projectedPoint, x=" + str(projectedPoint.x) +
                  "; y=" + str(projectedPoint.y) +
                  "; z=" + str(projectedPoint.z))
    if not projectedPoint.isEqualTo(point):
        # the point has been projected in order to get
        # a parameter so it's not on the face
        ui.messageBox("Point not on face\n(Point was projected in order to get parameter)")
        return
    returnValue = evaluator.isParameterOnFace(parameter)
    if not returnValue:
        ui.messageBox("Point not on face\n(isParameterOnFace says so)")
        return
    ui.messageBox("Point on face")
You just have to select a SketchPoint and a BRepFace in the UI before running it:
The Forge Devcon is quickly approaching on June 15-16. However, the day before, on June 14, there are some workshops that you’re invited to. The fact that there are workshops is a bit hidden on the website, so I wanted to point them out: if you’re coming to the conference, you can schedule an extra day before it begins and attend. The best place to learn more about the workshops is the registration site. The workshops are free, but you need to register because room is limited.
Here’s a quick overview of the topics that will be covered, and the schedule:
Model Derivative and Viewing API Workshop - 10am-4pm.
In this workshop, you will work through a tutorial of the basic steps you need to display a 2D or 3D design on your webpage. The basic tutorial will take approximately 2 hours; the remainder of the time will be spent expanding your knowledge of these APIs by either working on your own project or working on one of our preset challenges – with help from the instructors.
Fusion 360 Client API Workshop - 10am-4pm
Design Automation (a.k.a. AutoCAD I/O) API Workshop - 10am-4pm
In this workshop, you will learn the APIs and basic steps needed to create your first Design Automation application. The basic tutorial will take approximately 2 hours. The remainder of the time will be spent working on your own project – with help from the instructors.
In case you did not spot this on the Forge DevCon site, the day before the conference we are also organizing workshops that you're welcome to attend - including a Fusion 360 API workshop with Brian and a few of us :)
There is an article related to Appearances, and a sample script installed with Fusion 360, "ApplyAppearanceToSelection.py", shows how to access Appearances from the libraries and assign them to objects in the model.
In some cases you may want to drill down into the properties of the Appearances used in the model, to find e.g. the texture being used by a given Appearance. The following Python sample only shows how to access the name and value of the properties, but there are more available than that. If you look at the Property object in the online help, in the "Derived Classes" section you'll see that quite a few other objects are derived from it. To access all the properties, you'd need to handle each of those derived types and check their specific properties: http://help.autodesk.com/view/NINVFUS/ENU/?guid=GUID-db167e70-665f-4101-ba3c-3bcc88000fc7
import adsk.core, adsk.fusion, adsk.cam, traceback

def exportProperties(properties, indent, outputFile):
    for prop in properties:
        if type(prop) is adsk.core.AppearanceTextureProperty:
            outputFile.writelines(indent + prop.name + "\n")
            try:
                exportProperties(prop.value.properties, indent + "  ", outputFile)
            except:
                outputFile.writelines(indent + "  Couldn't get sub properties\n")
        elif type(prop) is adsk.core.ColorProperty:
            color = prop.value
            outputFile.writelines(indent +
                "red = " + str(color.red) +
                "; green = " + str(color.green) +
                "; blue = " + str(color.blue) + "\n")
        else:
            outputFile.writelines(indent + prop.name + " = " + str(prop.value) + "\n")

def run(context):
    ui = None
    try:
        app = adsk.core.Application.get()
        ui = app.userInterface
        fileDialog = ui.createFileDialog()
        fileDialog.isMultiSelectEnabled = False
        fileDialog.title = "Get the file to save to"
        fileDialog.filter = 'Text files (*.txt)'
        fileDialog.filterIndex = 0
        dialogResult = fileDialog.showSave()
        if dialogResult == adsk.core.DialogResults.DialogOK:
            fileName = fileDialog.filename
            design = app.activeProduct
            with open(fileName, 'w') as outputFile:
                for appearance in design.appearances:
                    outputFile.writelines(">>>>> " + appearance.name + " <<<<<\n")
                    exportProperties(appearance.appearanceProperties, "  ", outputFile)
    except:
        if ui:
            ui.messageBox('Failed:\n{}'.format(traceback.format_exc()))
You will get something like this:
>>>>> Oak - Semigloss <<<<<
Material Type = 0
Emission = False
Reflectance = 0.06027025
Emissive Luminance = 0.0
red = 255; green = 255; blue = 255
Depth = 0.5
red = 255; green = 255; blue = 255
Translucency = False
red = 255; green = 255; blue = 255
Anisotropy = 0.0
Couldn't get sub properties
NDF = surface_ndf_ggx
Source = /Users/adamnagy/Library/Containers/com.autodesk.mas.fusion360/
Data/Library/Application Support/Autodesk/Common/Material Library/16021701/
Amount = 0.003
Amount = 1.0
Bump Type = bumpmap_height_map
Sharing = common_shared
red = 80; green = 80; blue = 80
Tint = False
Link texture transforms = False
Map Channel = 1
Map Channel = 1
UVW Source = 0
Offset Lock = False
Offset = 0.0
Offset Y = 0.0
Sample Size = 18.0
Size Y = 36.0
Scale Lock = True
U Offset = 0.0
Horizontal = True
U Scale = 1.0
UV Scale = 1.0
V Offset = 0.0
Vertical = True
V Scale = 1.0
Rotation = 0.0
Rotation = 0.0
Roughness = 0.2
There was a discussion on the Fusion forum about being able to create a construction point positioned at a specific X, Y, Z location. There is also a post on the IdeaStation. Fusion doesn’t currently support that capability; it only supports the creation of construction points relative to existing geometry. When creating construction geometry, there are two situations to consider: capturing design history (parametric modeling) and not capturing design history (direct edit modeling). You switch between the two modeling types using the command in the context menu on the root component node in the browser, as shown below.
Direct Edit
In a direct edit design, construction points don’t remember their relationship to the original geometry used to create them, and they can be moved to any location after creation using the Move command. Using the Move command, it is possible to end up with a construction point at the desired X, Y, Z location, but it’s an inconvenient workflow to first create the point and then move it.
Parametric Model
When working in a parametric model, all construction points are dependent on other geometry and can’t exist anywhere in space, so it’s not possible to position a point at an arbitrary X, Y, Z location. However, parametric designs support something called a “Base Feature”. A base feature lets you create an “island” of direct edit data within your parametric design, so it ends up being the same workflow as in a direct modeling design, except that you also need to first create a base feature, as shown below.
So, it is possible to create a construction point at a specific X, Y, Z location in both parametric and direct modeling designs, but it’s inconvenient in both. With the recent May 7 Fusion update, the API now supports the creation and edit of base features. With this capability it’s now possible to write an add-in that will support the creation of a construction point at a specified X, Y, Z location.
Once the add-in is installed, you’ll now see a new “Point at Coordinate” command in the CONSTRUCT menu, as shown below.
When you run the command in a direct modeling design, the dialog on the left is displayed; in a parametric design, the dialog on the right is displayed.
Using the “Name” field you can specify what the name of the new construction point will be. The default is “XYZ Point” but sometimes, depending on how you’re going to use the point, it can be useful to name it so you can easily identify which one is which when looking at the browser.
The “X Position”, “Y Position”, and “Z Position” fields are the X, Y, and Z values where the construction point will be created. You can use any valid expression when specifying these values, including the use of parameter names. For example, if you have a parameter named Length you can use “Length/2”. The default units are whatever the current design units are. But you can also override that by specifying the units. For example “3 in” will be interpreted as 3 inches. Whatever you use as input, it’s important to understand the point doesn’t remember this relationship so that if you use a parameter name and then later change the parameter value, the point will not move.
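Under the hood, an add-in can hand this parsing off to the design's units manager. Here's a minimal sketch; the helper name is mine, and my reading of evaluateExpression (a real Fusion 360 API method on UnitsManager) is that it returns the value in internal units (centimeters for lengths), with the second argument supplying the default units when the expression omits them — worth double-checking against the API reference.

```python
# Hypothetical helper: turn a user-typed expression such as "3 in" or
# "Length/2" into a coordinate value, deferring all parsing (units,
# parameter names, arithmetic) to Fusion's UnitsManager.
def coordinate_from_expression(units_manager, expression):
    # Assumed behavior: result comes back in internal units (cm for
    # lengths); 'cm' is the default unit applied to unitless input.
    return units_manager.evaluateExpression(expression, 'cm')
```

This keeps the add-in from having to reimplement unit conversion or parameter lookup itself.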
When the command is run in a parametric design, the “Base feature” dropdown at the top of the dialog is displayed. Because points at an arbitrary location must be created within a base feature when working in a parametric design, this lets you specify which base feature they’ll be created in. If there isn’t an existing base feature, a new one will be created and that will be the default base feature when you run the command again, although you have the option of choosing any existing base feature.
To use the add-in just unpack the zip file anywhere on your computer. Run the “Scripts and Add-Ins” command and click the green “+”, as shown below. Browse to the location where you unpacked the zip file and choose the PointAtCoord.py file. You’ll only need to do this once because Fusion will remember it for subsequent sessions.
You can either shut down and restart Fusion to make the command available, or while in the same session you can start the add-in. This is only needed when first enabling the add-in because it will load automatically in subsequent sessions of Fusion. To start the add-in, choose the “Add-Ins” tab on the “Scripts and Add-Ins” dialog, select the “PointAtCoord” add-in from the list, and click the “Run” button.
Now, and in all future sessions of Fusion, the “Point at Coordinate” command will be available.
You can find the source code for this add-in and a few others here on GitHub. Please let me know if you find any problems or have suggestions on improvements.
This past Saturday a new Fusion update went out. There is some API functionality in that update that enables a lot more types of programs. These new capabilities are attributes and base features.
The ability to associate data with any entity is now supported through attributes. Every object that supports attributes now has an 'attributes' property that returns an Attributes object. You can read more about this and why you might want to use it in the User Manual topic for attributes.
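As a sketch of how that might look in practice — the helper names and the group/attribute strings below are my own, while attributes.add, attributes.itemByName, and the value property come from the API reference:

```python
# Hedged sketch: store and retrieve custom data on any Fusion entity
# that exposes an 'attributes' property (e.g. a BRepFace or Occurrence).
def tag_entity(entity, group_name, attr_name, value):
    """Attach a named string value to the entity."""
    entity.attributes.add(group_name, attr_name, value)

def read_tag(entity, group_name, attr_name):
    """Return the stored string, or None if the attribute doesn't exist."""
    attr = entity.attributes.itemByName(group_name, attr_name)
    return attr.value if attr else None
```

An add-in could use this, for example, to remember which faces it has already processed across sessions, since attributes are saved with the design.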
There is also now support to create base features and to create entities within a base feature. For example, you can use the new Sketches.addToBaseOrFormFeature method to create a new sketch in an existing base feature, or the MeshBody.add method now supports an argument to let you specify a base feature to create the mesh body within. Base features provide a way to create direct-modeling geometry within a parametric model. It’s like an “island” of direct modeling. This is useful in a few cases where certain types of creation are only supported in direct modeling mode but instead of converting the entire design and losing all of the modeling intelligence, you can create a base feature and create the direct modeling geometry within the base feature.
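The shape of that workflow might look like the sketch below. The method names (baseFeatures.add, startEdit, finishEdit, Sketches.addToBaseOrFormFeature) come from the API reference, but the exact argument list of addToBaseOrFormFeature should be checked against the docs; treat this as an outline, not a drop-in script.

```python
# Hedged sketch: create a base feature in a parametric component and
# add a sketch inside it (direct-modeling "island" within the timeline).
def sketch_in_base_feature(component):
    base = component.features.baseFeatures.add()  # create the island
    base.startEdit()                              # open it for editing
    try:
        # Assumed two-argument form: (planar entity, target base feature)
        sketch = component.sketches.addToBaseOrFormFeature(
            component.xYConstructionPlane, base)
        return sketch
    finally:
        base.finishEdit()                         # close the base feature
```

Anything created between startEdit and finishEdit lives inside the base feature rather than as a parametric timeline feature.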
One more thing that’s not new API functionality is that the User Manual topic for Commands has been re-written and better describes the full capability of creating custom commands. I know that the documentation for creating custom commands wasn’t the best and people were struggling with commands. Hopefully this will help a lot of you.
Actually, it’s time for proposals for classes. If you have some cool tips and techniques or an interesting project to share, you should think about submitting a proposal and maybe you can be a presenter at AU this year. As a presenter you get the satisfaction of sharing something that’s important to you with your peers. You also get an AU event pass (conference only; does not include travel or hotel accommodations), and you will receive a $400 cash honorarium for each additional session you lead. You can learn more at the ”Call for Proposals” AU site.
Whether or not you feel like you’re up to the challenge of presenting, I would love to hear suggestions for topics on classes you would like to see related to both the Inventor and Fusion APIs. Remember, even if you’re unable to attend AU in person, you’re still able to benefit from the content from previous sessions.
I posted a few weeks ago about some upcoming meetups discussing the Fusion API. To generate some interest for the meetups I created a small animation of some butter melting and forming a puddle in a butter dish. Here’s the original animation. At the meetups I presented an introduction to Fusion’s API and shared how the animation was created. The meetup in Seattle was hosted by Microsoft and Jeremy Foster, from Microsoft, was kind enough to record it and post it on Microsoft’s Channel 9.
If you’re wondering how I created the animation and are imagining something elaborate you’re going to be a bit disappointed. The good news is that it’s probably simpler to do than you’re expecting. First, there are not any elaborate volume calculations to make sure the volume of the puddle matches the volume that has melted from the cube. I just tried to do what looked good. The melting of the cube and the formation of the puddle is done using parameters so the majority of the work is building the model, not writing the program. Although I’m using Fusion here, this is all possible with Inventor using the same concepts. What the program is doing is actually very simple; there’s a loop where in each iteration it changes the value of one or more parameters, updates the model, and captures the screen as an image. That’s it. The challenge is building the model so you get the behavior you want by editing parameters.
Let’s look in more detail at how the butter model works. There are three components that make up the model: the butter dish, the block of butter, and the puddle. Below, three sketches are shown that are the key to making it all work. The sketch at the bottom defines the shape of the puddle. The other two sketches are used to create a loft feature which is subtracted from the block of butter.
Below is a detailed look at one of the sketches used to define the loft that cuts away the block of butter. The sketch consists of three lines and a spline. The vertical position of the points on the spline is being controlled through the use of dimension constraints. When you place a dimension in a sketch, a parameter is automatically created that controls the value of the dimension. By editing the parameters the points on the spline will move up and down. The program modifies the parameter values so that the points slowly move down, causing the block of butter to disappear.
The puddle uses the same principle but is slightly more complicated because the points don’t just move down but move in two directions. Below is the puddle sketch. To move the points in two directions there are two dimensions on each spline point, one controlling the horizontal (x) direction and another controlling the vertical (y) direction. To make the puddle grow, the parameters are edited to slowly move the points “out” away from the cube. I didn’t worry about how the puddle intersects the butter dish. As it grows it will end up extending into the dish, but it looks ok because you can’t see the overlap.
Below is a snapshot of most of the parameters that are used to drive the model. Sketch7 is the puddle sketch and Sketch8 is one of the loft sketches. I named each of the parameters a name that made sense to me so I knew which parameter to edit to get the corresponding point to move in the way I wanted.
Below is a sample program that demonstrates the full basic workflow: it changes a parameter through a range of values and captures an image at each change. It has a loop where it continually changes the parameter named “ToChange”. In the example below, the loop runs until the parameter reaches a specified value, but it could also run a pre-defined number of steps. It depends on what you need in your specific case.
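As a rough sketch of that loop — the parameter name, step size, frame naming, and the callback seams here are illustrative, not the exact code from the post:

```python
# Hedged sketch of the capture loop. In Fusion, set_value would assign
# userParameter.value (which triggers a model recompute) and capture
# would call app.activeViewport.saveAsImageFile(filename, width, height).
def sweep_parameter(get_value, set_value, capture, stop_at, step):
    """Drive a parameter from its current value up to stop_at,
    saving one frame per change; returns the number of frames."""
    frame = 0
    value = get_value()
    while value < stop_at:
        value = min(value + step, stop_at)
        set_value(value)                      # model recomputes here
        capture('frame{:04d}.png'.format(frame))
        frame += 1
    return frame
```

Each iteration bumps the parameter and saves a frame; nothing stops you from changing several parameters per iteration, as the butter example does.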
The code is really fairly simple. It's all a matter of how creative you get with your parametric model. You can use anything to determine the parameter value and you don't need to be limited to changing a single value. In the melting butter example, I change several values and use a random number so the butter melted and the puddle formed in a slightly random way.
The result is a directory containing a bunch of image files. There are many products that will let you combine the images into a video. For the gif file at the top of this post, I used GIMP (Gnu Image Manipulation Program). For a higher resolution, full color animation I used another free tool called FFmpeg where I used the command line below to create the video:
Here is the Fusion model and the Python program for the melting butter example. When running the code it begins by displaying a message box to allow you to reset the parameter values back to the original starting values and then an option to run through the parameter changes and generate the images. If you look at the code you’ll see that it does some work to reposition the history marker in the timeline to reduce the re-compute time because Fusion re-computes with every parameter change without an option to delay the re-compute. It moves the marker to just before the features that will re-compute, changes the parameters and then moves it to the end of the timeline so there is a single re-compute of the model with each set of parameter changes.
You might have heard by now of the biggest ever Autodesk Developer Conference coming this June: Forge DevCon.
The Early Bird tickets are running out this Friday, so you'll have to hurry up if you want to use them! :)
At the conference you can learn everything there is to know about our Forge Platform and how you can build on top of it.
Brian will be there to talk about Fusion API, and I will be there too explaining how you will be able to take advantage of our translation services to get data out of pretty much any design file you have.
Then the following week we'll also hold a Cloud Accelerator, a program that we have run many times by now all around the world with great success. Here we work side-by-side with 3rd party developers to jump start their development on our cloud technologies.
It's looking to be a great couple of weeks that nobody should miss :)
The first ever Autodesk Forge Developer Conference will take place in San Francisco on June 15-16. This is a great opportunity to get up to speed with what’s been going on with Fusion and to get a good look at what’s coming with the new Forge platform. We’ll be talking about a lot of upcoming functionality and will be able to demonstrate some of it. Complete information about the conference is available by clicking the picture below.
We’re still working on the agenda for the conference so check back often for updates. Besides the Forge conference itself, we’re also planning a Fusion API workshop on either the 13th or 14th, so if Fusion is your primary interest you won’t want to miss that. Keep watching for updates on the website as we firm up details.
Early bird registration is due to end on April 15th, so you don't have much time to buy your ticket at the low cost of $499. If you're a student, you can come for free – just sign up for a student ticket using an .edu email address.
The week following the Forge conference we’ll also be holding a Forge Accelerator in San Francisco. This is a chance for you to work on your product that uses Forge and get personal assistance from the experts at Autodesk. See the website for information about how to apply for the accelerator.
I hope to see you at any or all of the upcoming events.